Test Report: QEMU_macOS 20053

ee589ed5f2e38de21e277596fb8e32edfda5a06e:2024-12-05:37358

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.79
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.08
27 TestAddons/Setup 10.23
28 TestCertOptions 10.23
29 TestCertExpiration 195.39
30 TestDockerFlags 10.11
31 TestForceSystemdFlag 10.11
32 TestForceSystemdEnv 10.35
38 TestErrorSpam/setup 9.95
47 TestFunctional/serial/StartWithProxy 9.94
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.07
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.21
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.19
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.31
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.3
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 113.73
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.05
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.05
105 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/Version/components 0.05
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.33
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 25.04
141 TestMultiControlPlane/serial/StartCluster 10.12
142 TestMultiControlPlane/serial/DeployApp 78.03
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.09
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 52.93
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.39
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.55
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 10.12
165 TestJSONOutput/start/Command 9.86
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.34
197 TestMountStart/serial/StartWithMountFirst 10.13
200 TestMultiNode/serial/FreshStart2Nodes 9.95
201 TestMultiNode/serial/DeployApp2Nodes 81.74
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.09
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 50.84
209 TestMultiNode/serial/RestartKeepsNodes 8.68
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 2.28
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.05
217 TestPreload 10.14
219 TestScheduledStopUnix 10.12
220 TestSkaffold 12.61
223 TestRunningBinaryUpgrade 640.19
225 TestKubernetesUpgrade 18.54
239 TestStoppedBinaryUpgrade/Upgrade 592.11
249 TestPause/serial/Start 10.22
252 TestNoKubernetes/serial/StartWithK8s 9.86
253 TestNoKubernetes/serial/StartWithStopK8s 5.99
254 TestNoKubernetes/serial/Start 7.38
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.87
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.26
260 TestNoKubernetes/serial/StartNoArgs 5.36
262 TestNetworkPlugins/group/auto/Start 9.75
263 TestNetworkPlugins/group/kindnet/Start 9.93
264 TestNetworkPlugins/group/calico/Start 9.81
265 TestNetworkPlugins/group/custom-flannel/Start 9.88
266 TestNetworkPlugins/group/false/Start 10.02
267 TestNetworkPlugins/group/enable-default-cni/Start 9.84
268 TestNetworkPlugins/group/flannel/Start 9.79
269 TestNetworkPlugins/group/bridge/Start 10
270 TestNetworkPlugins/group/kubenet/Start 9.86
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.86
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 10.06
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.26
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 9.83
295 TestStartStop/group/embed-certs/serial/DeployApp 0.1
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
299 TestStartStop/group/embed-certs/serial/SecondStart 5.26
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
301 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/embed-certs/serial/Pause 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.94
307 TestStartStop/group/newest-cni/serial/FirstStart 9.79
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11

TestDownloadOnly/v1.20.0/json-events (15.79s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-019000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-019000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.789214875s)

-- stdout --
	{"specversion":"1.0","id":"7dd4e3fb-194e-4999-9823-10ed9736b78c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-019000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"287290b7-123c-4042-b2bf-ad2fae43bb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"6338f33a-25a6-485b-a1cf-5506356cfb23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig"}}
	{"specversion":"1.0","id":"99763d16-f9b0-4b5d-98c0-39c6eb0cba7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1631e3bd-38bc-4276-8606-9d0ee5924f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2cb6936f-7627-4102-9e1e-65669524229e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube"}}
	{"specversion":"1.0","id":"7cebf7a9-ac57-4f81-ae03-4d5045b8db5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"074e78c9-4e52-4473-aaab-ba8ce2df330d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e6241a7-cd9a-4c57-87f6-05b614ceef08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e4aa3501-b58e-4a02-a47a-4b35db172155","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba9d3306-5e52-46db-a892-573a77bcb8e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-019000\" primary control-plane node in \"download-only-019000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88e464fe-0fb3-44a3-acf6-bc04bdb96a94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a93e46b-791e-4eb6-8792-5c405f8a7028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320] Decompressors:map[bz2:0x14000803610 gz:0x14000803618 tar:0x14000803570 tar.bz2:0x14000803580 tar.gz:0x14000803590 tar.xz:0x140008035a0 tar.zst:0x140008035f0 tbz2:0x14000803580 tgz:0x14000803590 txz:0x140008035a0 tzst:0x140008035f0 xz:0x14000803630 zip:0x14000803660 zst:0x14000803638] Getters:map[file:0x140017a8560 http:0x14000864190 https:0x140008641e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"401e5923-b5f5-4d57-b333-1784bc7e1696","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1205 11:27:20.110859    7923 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:27:20.111031    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:20.111034    7923 out.go:358] Setting ErrFile to fd 2...
	I1205 11:27:20.111037    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:20.111165    7923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	W1205 11:27:20.111262    7923 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20053-7409/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20053-7409/.minikube/config/config.json: no such file or directory
	I1205 11:27:20.112844    7923 out.go:352] Setting JSON to true
	I1205 11:27:20.131258    7923 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5209,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:27:20.131340    7923 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:27:20.137011    7923 out.go:97] [download-only-019000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:27:20.137145    7923 notify.go:220] Checking for updates...
	W1205 11:27:20.137207    7923 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 11:27:20.140131    7923 out.go:169] MINIKUBE_LOCATION=20053
	I1205 11:27:20.143167    7923 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:27:20.147985    7923 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:27:20.151102    7923 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:27:20.154151    7923 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	W1205 11:27:20.160147    7923 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 11:27:20.160454    7923 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:27:20.163069    7923 out.go:97] Using the qemu2 driver based on user configuration
	I1205 11:27:20.163088    7923 start.go:297] selected driver: qemu2
	I1205 11:27:20.163101    7923 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:27:20.163190    7923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:27:20.166088    7923 out.go:169] Automatically selected the socket_vmnet network
	I1205 11:27:20.171590    7923 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 11:27:20.171687    7923 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:27:20.171736    7923 cni.go:84] Creating CNI manager for ""
	I1205 11:27:20.171774    7923 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:27:20.171836    7923 start.go:340] cluster config:
	{Name:download-only-019000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-019000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:27:20.176413    7923 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:27:20.179124    7923 out.go:97] Downloading VM boot image ...
	I1205 11:27:20.179139    7923 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1205 11:27:27.606620    7923 out.go:97] Starting "download-only-019000" primary control-plane node in "download-only-019000" cluster
	I1205 11:27:27.606645    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:27.667782    7923 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:27:27.667805    7923 cache.go:56] Caching tarball of preloaded images
	I1205 11:27:27.668047    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:27.673321    7923 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 11:27:27.673328    7923 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:27.755775    7923 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:27:34.575513    7923 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:34.575694    7923 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:35.270172    7923 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:27:35.270361    7923 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/download-only-019000/config.json ...
	I1205 11:27:35.270378    7923 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/download-only-019000/config.json: {Name:mkb66e6542a11c8b8c37524c92ae54d6c9226a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:27:35.270660    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:35.270914    7923 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1205 11:27:35.818909    7923 out.go:193] 
	W1205 11:27:35.822952    7923 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320] Decompressors:map[bz2:0x14000803610 gz:0x14000803618 tar:0x14000803570 tar.bz2:0x14000803580 tar.gz:0x14000803590 tar.xz:0x140008035a0 tar.zst:0x140008035f0 tbz2:0x14000803580 tgz:0x14000803590 txz:0x140008035a0 tzst:0x140008035f0 xz:0x14000803630 zip:0x14000803660 zst:0x14000803638] Getters:map[file:0x140017a8560 http:0x14000864190 https:0x140008641e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1205 11:27:35.822974    7923 out_reason.go:110] 
	W1205 11:27:35.829910    7923 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:27:35.833885    7923 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-019000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.79s)
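The json-events stream in the stdout above is a sequence of CloudEvents-style JSON lines (specversion, id, source, type, datacontenttype, data). As a minimal decoding sketch — the struct below is illustrative, not minikube's own type — one event line from this run parses like so:

	// decode_event.go: parse one line of minikube's -o=json event stream.
	// The cloudEvent struct simply mirrors the fields visible in the output above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// An event line copied verbatim from the stdout above.
		line := `{"specversion":"1.0","id":"287290b7-123c-4042-b2bf-ad2fae43bb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Println("decode error:", err)
			return
		}
		// Prints: io.k8s.sigs.minikube.info: MINIKUBE_LOCATION=20053
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}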

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
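This subtest fails as a direct consequence of the 404 in the previous subtest: the checksum fetch for kubectl failed, so nothing was ever written to the cache path the test stats. dl.k8s.io appears to publish no darwin/arm64 kubectl binary for v1.20.0 (Apple-silicon release binaries arrived in later Kubernetes versions), which would explain the 404. A quick standalone check of that URL — a sketch, not part of the test suite:

	// check404.go: HEAD the checksum URL that the getter tried to fetch.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL taken from the INET_CACHE_KUBECTL error above.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // 404 Not Found at the time of this run
	}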

TestOffline (10.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-856000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-856000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.926491292s)

-- stdout --
	* [offline-docker-856000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-856000" primary control-plane node in "offline-docker-856000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-856000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:38:23.463273    9603 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:38:23.463451    9603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:23.463457    9603 out.go:358] Setting ErrFile to fd 2...
	I1205 11:38:23.463460    9603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:23.463589    9603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:38:23.464951    9603 out.go:352] Setting JSON to false
	I1205 11:38:23.484308    9603 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5872,"bootTime":1733421631,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:38:23.484398    9603 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:38:23.487906    9603 out.go:177] * [offline-docker-856000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:38:23.490791    9603 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:38:23.490810    9603 notify.go:220] Checking for updates...
	I1205 11:38:23.498810    9603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:38:23.501769    9603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:38:23.504766    9603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:38:23.507834    9603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:38:23.510769    9603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:38:23.514214    9603 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:38:23.514272    9603 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:38:23.517794    9603 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:38:23.524798    9603 start.go:297] selected driver: qemu2
	I1205 11:38:23.524810    9603 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:38:23.524821    9603 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:38:23.526955    9603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:38:23.529817    9603 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:38:23.531072    9603 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:38:23.531092    9603 cni.go:84] Creating CNI manager for ""
	I1205 11:38:23.531113    9603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:38:23.531117    9603 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:38:23.531145    9603 start.go:340] cluster config:
	{Name:offline-docker-856000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-856000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:38:23.535536    9603 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:38:23.538846    9603 out.go:177] * Starting "offline-docker-856000" primary control-plane node in "offline-docker-856000" cluster
	I1205 11:38:23.546825    9603 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:38:23.546870    9603 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:38:23.546880    9603 cache.go:56] Caching tarball of preloaded images
	I1205 11:38:23.546970    9603 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:38:23.546975    9603 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:38:23.547045    9603 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/offline-docker-856000/config.json ...
	I1205 11:38:23.547055    9603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/offline-docker-856000/config.json: {Name:mk5a8c800c1f844c26ede59e10e5ef67bd2072b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:38:23.547348    9603 start.go:360] acquireMachinesLock for offline-docker-856000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:23.547392    9603 start.go:364] duration metric: took 37.75µs to acquireMachinesLock for "offline-docker-856000"
	I1205 11:38:23.547403    9603 start.go:93] Provisioning new machine with config: &{Name:offline-docker-856000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-856000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:38:23.547442    9603 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:38:23.555785    9603 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:38:23.571822    9603 start.go:159] libmachine.API.Create for "offline-docker-856000" (driver="qemu2")
	I1205 11:38:23.571860    9603 client.go:168] LocalClient.Create starting
	I1205 11:38:23.571940    9603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:38:23.571977    9603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:23.571989    9603 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:23.572043    9603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:38:23.572071    9603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:23.572079    9603 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:23.572452    9603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:38:23.734951    9603 main.go:141] libmachine: Creating SSH key...
	I1205 11:38:23.868108    9603 main.go:141] libmachine: Creating Disk image...
	I1205 11:38:23.868116    9603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:38:23.868284    9603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:23.878387    9603 main.go:141] libmachine: STDOUT: 
	I1205 11:38:23.878422    9603 main.go:141] libmachine: STDERR: 
	I1205 11:38:23.878485    9603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2 +20000M
	I1205 11:38:23.888128    9603 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:38:23.888150    9603 main.go:141] libmachine: STDERR: 
	I1205 11:38:23.888167    9603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:23.888172    9603 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:38:23.888196    9603 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:23.888233    9603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7b:e9:2b:eb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:23.890231    9603 main.go:141] libmachine: STDOUT: 
	I1205 11:38:23.890248    9603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:23.890267    9603 client.go:171] duration metric: took 318.405209ms to LocalClient.Create
	I1205 11:38:25.892325    9603 start.go:128] duration metric: took 2.344895208s to createHost
	I1205 11:38:25.892350    9603 start.go:83] releasing machines lock for "offline-docker-856000", held for 2.34497475s
	W1205 11:38:25.892360    9603 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:25.900979    9603 out.go:177] * Deleting "offline-docker-856000" in qemu2 ...
	W1205 11:38:25.910363    9603 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:25.910374    9603 start.go:729] Will try again in 5 seconds ...
	I1205 11:38:30.912594    9603 start.go:360] acquireMachinesLock for offline-docker-856000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:30.913183    9603 start.go:364] duration metric: took 480.791µs to acquireMachinesLock for "offline-docker-856000"
	I1205 11:38:30.913359    9603 start.go:93] Provisioning new machine with config: &{Name:offline-docker-856000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-856000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:38:30.913646    9603 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:38:30.919457    9603 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:38:30.969192    9603 start.go:159] libmachine.API.Create for "offline-docker-856000" (driver="qemu2")
	I1205 11:38:30.969275    9603 client.go:168] LocalClient.Create starting
	I1205 11:38:30.969462    9603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:38:30.969555    9603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:30.969572    9603 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:30.969663    9603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:38:30.969723    9603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:30.969738    9603 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:30.970650    9603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:38:31.142772    9603 main.go:141] libmachine: Creating SSH key...
	I1205 11:38:31.285141    9603 main.go:141] libmachine: Creating Disk image...
	I1205 11:38:31.285149    9603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:38:31.285382    9603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:31.295606    9603 main.go:141] libmachine: STDOUT: 
	I1205 11:38:31.295625    9603 main.go:141] libmachine: STDERR: 
	I1205 11:38:31.295687    9603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2 +20000M
	I1205 11:38:31.304073    9603 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:38:31.304088    9603 main.go:141] libmachine: STDERR: 
	I1205 11:38:31.304099    9603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:31.304104    9603 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:38:31.304115    9603 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:31.304160    9603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f9:ba:20:8c:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/offline-docker-856000/disk.qcow2
	I1205 11:38:31.305917    9603 main.go:141] libmachine: STDOUT: 
	I1205 11:38:31.305931    9603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:31.305943    9603 client.go:171] duration metric: took 336.657375ms to LocalClient.Create
	I1205 11:38:33.308106    9603 start.go:128] duration metric: took 2.394451417s to createHost
	I1205 11:38:33.308178    9603 start.go:83] releasing machines lock for "offline-docker-856000", held for 2.394985625s
	W1205 11:38:33.308534    9603 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-856000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-856000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:33.322286    9603 out.go:201] 
	W1205 11:38:33.326365    9603 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:38:33.326402    9603 out.go:270] * 
	* 
	W1205 11:38:33.329044    9603 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:38:33.339231    9603 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-856000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-05 11:38:33.356782 -0800 PST m=+673.330247043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-856000 -n offline-docker-856000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-856000 -n offline-docker-856000: exit status 7 (69.889667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-856000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-856000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-856000
--- FAIL: TestOffline (10.08s)
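TestOffline, like most of the start failures in this report, dies before Kubernetes is ever involved: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal probe of that socket — assuming the default path shown in the cluster config above — would distinguish "daemon down" from other causes:

	// vmnet_probe.go: check whether the socket_vmnet daemon is listening on
	// the default socket path seen in the logs above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" matches the failure mode in these logs:
			// nothing is accepting connections on the socket on the CI host.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}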

TestAddons/Setup (10.23s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-656000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-656000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.232400125s)

-- stdout --
	* [addons-656000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-656000" primary control-plane node in "addons-656000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-656000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:27:45.567956    8003 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:27:45.568113    8003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:45.568117    8003 out.go:358] Setting ErrFile to fd 2...
	I1205 11:27:45.568120    8003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:45.568293    8003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:27:45.569516    8003 out.go:352] Setting JSON to false
	I1205 11:27:45.587192    8003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5234,"bootTime":1733421631,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:27:45.587275    8003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:27:45.591323    8003 out.go:177] * [addons-656000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:27:45.598286    8003 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:27:45.598342    8003 notify.go:220] Checking for updates...
	I1205 11:27:45.605266    8003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:27:45.608193    8003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:27:45.611208    8003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:27:45.614245    8003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:27:45.617220    8003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:27:45.620408    8003 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:27:45.624222    8003 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:27:45.631221    8003 start.go:297] selected driver: qemu2
	I1205 11:27:45.631229    8003 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:27:45.631237    8003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:27:45.633785    8003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:27:45.637239    8003 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:27:45.638681    8003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:27:45.638700    8003 cni.go:84] Creating CNI manager for ""
	I1205 11:27:45.638724    8003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:27:45.638732    8003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:27:45.638778    8003 start.go:340] cluster config:
	{Name:addons-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:27:45.643402    8003 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:27:45.651250    8003 out.go:177] * Starting "addons-656000" primary control-plane node in "addons-656000" cluster
	I1205 11:27:45.655119    8003 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:27:45.655135    8003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:27:45.655147    8003 cache.go:56] Caching tarball of preloaded images
	I1205 11:27:45.655226    8003 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:27:45.655237    8003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:27:45.655450    8003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/addons-656000/config.json ...
	I1205 11:27:45.655463    8003 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/addons-656000/config.json: {Name:mk8cfe2172592d81496d4f8c74d09aa6df84568f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:27:45.655836    8003 start.go:360] acquireMachinesLock for addons-656000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:27:45.656081    8003 start.go:364] duration metric: took 238.375µs to acquireMachinesLock for "addons-656000"
	I1205 11:27:45.656096    8003 start.go:93] Provisioning new machine with config: &{Name:addons-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:27:45.656124    8003 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:27:45.664274    8003 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 11:27:45.684775    8003 start.go:159] libmachine.API.Create for "addons-656000" (driver="qemu2")
	I1205 11:27:45.684809    8003 client.go:168] LocalClient.Create starting
	I1205 11:27:45.684967    8003 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:27:45.793232    8003 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:27:45.901616    8003 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:27:46.110966    8003 main.go:141] libmachine: Creating SSH key...
	I1205 11:27:46.302154    8003 main.go:141] libmachine: Creating Disk image...
	I1205 11:27:46.302162    8003 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:27:46.302422    8003 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:46.313042    8003 main.go:141] libmachine: STDOUT: 
	I1205 11:27:46.313066    8003 main.go:141] libmachine: STDERR: 
	I1205 11:27:46.313133    8003 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2 +20000M
	I1205 11:27:46.321816    8003 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:27:46.321833    8003 main.go:141] libmachine: STDERR: 
	I1205 11:27:46.321847    8003 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:46.321858    8003 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:27:46.321898    8003 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:27:46.321931    8003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:cc:e5:bf:dd:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:46.323766    8003 main.go:141] libmachine: STDOUT: 
	I1205 11:27:46.323783    8003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:27:46.323810    8003 client.go:171] duration metric: took 638.991833ms to LocalClient.Create
	I1205 11:27:48.325966    8003 start.go:128] duration metric: took 2.669843459s to createHost
	I1205 11:27:48.326030    8003 start.go:83] releasing machines lock for "addons-656000", held for 2.66996175s
	W1205 11:27:48.326081    8003 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:27:48.340766    8003 out.go:177] * Deleting "addons-656000" in qemu2 ...
	W1205 11:27:48.368489    8003 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:27:48.368518    8003 start.go:729] Will try again in 5 seconds ...
	I1205 11:27:53.370659    8003 start.go:360] acquireMachinesLock for addons-656000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:27:53.371151    8003 start.go:364] duration metric: took 414.5µs to acquireMachinesLock for "addons-656000"
	I1205 11:27:53.371253    8003 start.go:93] Provisioning new machine with config: &{Name:addons-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:27:53.371580    8003 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:27:53.385480    8003 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 11:27:53.435007    8003 start.go:159] libmachine.API.Create for "addons-656000" (driver="qemu2")
	I1205 11:27:53.435071    8003 client.go:168] LocalClient.Create starting
	I1205 11:27:53.435310    8003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:27:53.435406    8003 main.go:141] libmachine: Decoding PEM data...
	I1205 11:27:53.435439    8003 main.go:141] libmachine: Parsing certificate...
	I1205 11:27:53.435522    8003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:27:53.435582    8003 main.go:141] libmachine: Decoding PEM data...
	I1205 11:27:53.435596    8003 main.go:141] libmachine: Parsing certificate...
	I1205 11:27:53.436250    8003 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:27:53.611012    8003 main.go:141] libmachine: Creating SSH key...
	I1205 11:27:53.695695    8003 main.go:141] libmachine: Creating Disk image...
	I1205 11:27:53.695700    8003 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:27:53.695890    8003 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:53.706376    8003 main.go:141] libmachine: STDOUT: 
	I1205 11:27:53.706391    8003 main.go:141] libmachine: STDERR: 
	I1205 11:27:53.706470    8003 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2 +20000M
	I1205 11:27:53.715356    8003 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:27:53.715371    8003 main.go:141] libmachine: STDERR: 
	I1205 11:27:53.715384    8003 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:53.715392    8003 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:27:53.715399    8003 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:27:53.715436    8003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:69:5d:9c:02:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/addons-656000/disk.qcow2
	I1205 11:27:53.717380    8003 main.go:141] libmachine: STDOUT: 
	I1205 11:27:53.717393    8003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:27:53.717406    8003 client.go:171] duration metric: took 282.310333ms to LocalClient.Create
	I1205 11:27:55.718585    8003 start.go:128] duration metric: took 2.346968416s to createHost
	I1205 11:27:55.718663    8003 start.go:83] releasing machines lock for "addons-656000", held for 2.347509291s
	W1205 11:27:55.719075    8003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-656000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-656000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:27:55.727485    8003 out.go:201] 
	W1205 11:27:55.732594    8003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:27:55.732643    8003 out.go:270] * 
	* 
	W1205 11:27:55.735286    8003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:27:55.752463    8003 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-656000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.23s)
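The libmachine lines above show how the guest network is wired: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to qemu as file descriptor 3 (hence -netdev socket,id=net0,fd=3). When that connect is refused, qemu never starts at all, which is why each create attempt fails within milliseconds and produces no boot output. The following Go sketch illustrates the same descriptor-passing pattern; it is illustrative only (the child command and flags are copied from the log, not from socket_vmnet_client's source) and relies on exec.Cmd.ExtraFiles mapping entry 0 to fd 3 in the child:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the daemon, as socket_vmnet_client does.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("connect failed (the error seen in this report): %v", err)
		}
		f, err := conn.(*net.UnixConn).File() // dup the descriptor for the child
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching -netdev socket,fd=3.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}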

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-279000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-279000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.890646917s)

-- stdout --
	* [cert-options-279000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-279000" primary control-plane node in "cert-options-279000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-279000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-279000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-279000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.670625ms)

-- stdout --
	* The control-plane node cert-options-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-279000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-279000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-279000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-279000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-279000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.606334ms)

-- stdout --
	* The control-plane node cert-options-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-279000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-279000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-279000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-05 11:50:14.18117 -0800 PST m=+1374.160990960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-279000 -n cert-options-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-279000 -n cert-options-279000: exit status 7 (33.161166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-279000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-279000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-279000
--- FAIL: TestCertOptions (10.23s)
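Note that the SAN assertions above are cascade failures: with the VM stopped, the ssh'd openssl command returns the "host is not running" message instead of a certificate, so none of the requested names could possibly be found. What the test effectively verifies once a cluster is up is that the apiserver certificate's subject alternative names contain the extra --apiserver-ips and --apiserver-names values. A sketch of that check using Go's crypto/x509 (the local file path here is hypothetical; the test reads /var/lib/minikube/certs/apiserver.crt inside the VM over ssh):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expected: localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expected: 127.0.0.1, 192.168.15.15
	}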

TestCertExpiration (195.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.982224375s)

-- stdout --
	* [cert-expiration-187000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-187000" primary control-plane node in "cert-expiration-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.240537334s)

-- stdout --
	* [cert-expiration-187000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-187000" primary control-plane node in "cert-expiration-187000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-187000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-187000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-187000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-187000" primary control-plane node in "cert-expiration-187000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-187000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-05 11:53:04.028096 -0800 PST m=+1544.022487126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-187000 -n cert-expiration-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-187000 -n cert-expiration-187000: exit status 7 (73.703917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-187000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-187000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-187000
--- FAIL: TestCertExpiration (195.39s)
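The 195.39s duration is expected even though both starts fail fast: the first start takes 9.98s, the test then waits out the three-minute --cert-expiration=3m window so the certificates can actually expire, and the second start adds 5.24s, i.e. roughly 9.98 s + 180 s + 5.24 s ≈ 195.2 s, with profile cleanup accounting for the remainder. The second start is then expected to warn about expired certificates; that check reduces to comparing the certificate's NotAfter field against the current time (with a parsed *x509.Certificate as in the sketch after TestCertOptions above, simply time.Now().After(cert.NotAfter)).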

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-984000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-984000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.863069125s)

-- stdout --
	* [docker-flags-984000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-984000" primary control-plane node in "docker-flags-984000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-984000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:49:53.982184   10215 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:49:53.982338   10215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:53.982341   10215 out.go:358] Setting ErrFile to fd 2...
	I1205 11:49:53.982348   10215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:53.982481   10215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:49:53.983588   10215 out.go:352] Setting JSON to false
	I1205 11:49:54.001366   10215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6562,"bootTime":1733421631,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:49:54.001466   10215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:49:54.007016   10215 out.go:177] * [docker-flags-984000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:49:54.013907   10215 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:49:54.013969   10215 notify.go:220] Checking for updates...
	I1205 11:49:54.025015   10215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:49:54.027958   10215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:49:54.030978   10215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:49:54.034013   10215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:49:54.035352   10215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:49:54.038371   10215 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:49:54.038465   10215 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:49:54.038518   10215 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:49:54.042976   10215 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:49:54.048005   10215 start.go:297] selected driver: qemu2
	I1205 11:49:54.048012   10215 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:49:54.048018   10215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:49:54.050625   10215 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:49:54.054058   10215 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:49:54.057094   10215 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1205 11:49:54.057121   10215 cni.go:84] Creating CNI manager for ""
	I1205 11:49:54.057150   10215 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:49:54.057155   10215 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:49:54.057186   10215 start.go:340] cluster config:
	{Name:docker-flags-984000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:49:54.061947   10215 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:49:54.069026   10215 out.go:177] * Starting "docker-flags-984000" primary control-plane node in "docker-flags-984000" cluster
	I1205 11:49:54.072952   10215 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:49:54.072971   10215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:49:54.072981   10215 cache.go:56] Caching tarball of preloaded images
	I1205 11:49:54.073052   10215 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:49:54.073058   10215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:49:54.073108   10215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/docker-flags-984000/config.json ...
	I1205 11:49:54.073119   10215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/docker-flags-984000/config.json: {Name:mkaf1d5bb712258649f7f210bc250a75aaba4bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:49:54.073465   10215 start.go:360] acquireMachinesLock for docker-flags-984000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:54.073514   10215 start.go:364] duration metric: took 43µs to acquireMachinesLock for "docker-flags-984000"
	I1205 11:49:54.073526   10215 start.go:93] Provisioning new machine with config: &{Name:docker-flags-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:54.073560   10215 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:54.079972   10215 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:54.097177   10215 start.go:159] libmachine.API.Create for "docker-flags-984000" (driver="qemu2")
	I1205 11:49:54.097201   10215 client.go:168] LocalClient.Create starting
	I1205 11:49:54.097272   10215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:54.097309   10215 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:54.097322   10215 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:54.097368   10215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:54.097399   10215 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:54.097406   10215 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:54.097771   10215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:54.256367   10215 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:54.396138   10215 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:54.396144   10215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:54.396364   10215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:49:54.406858   10215 main.go:141] libmachine: STDOUT: 
	I1205 11:49:54.406888   10215 main.go:141] libmachine: STDERR: 
	I1205 11:49:54.406942   10215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2 +20000M
	I1205 11:49:54.415670   10215 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:54.415688   10215 main.go:141] libmachine: STDERR: 
	I1205 11:49:54.415706   10215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:49:54.415712   10215 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:54.415723   10215 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:54.415750   10215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:74:aa:e1:44:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:49:54.417591   10215 main.go:141] libmachine: STDOUT: 
	I1205 11:49:54.417610   10215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:54.417629   10215 client.go:171] duration metric: took 320.425709ms to LocalClient.Create
	I1205 11:49:56.419834   10215 start.go:128] duration metric: took 2.346273459s to createHost
	I1205 11:49:56.419915   10215 start.go:83] releasing machines lock for "docker-flags-984000", held for 2.346413417s
	W1205 11:49:56.419954   10215 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:56.443072   10215 out.go:177] * Deleting "docker-flags-984000" in qemu2 ...
	W1205 11:49:56.464611   10215 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:56.464633   10215 start.go:729] Will try again in 5 seconds ...
	I1205 11:50:01.466541   10215 start.go:360] acquireMachinesLock for docker-flags-984000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:01.467299   10215 start.go:364] duration metric: took 617.5µs to acquireMachinesLock for "docker-flags-984000"
	I1205 11:50:01.467456   10215 start.go:93] Provisioning new machine with config: &{Name:docker-flags-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:01.467737   10215 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:01.473401   10215 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:50:01.523487   10215 start.go:159] libmachine.API.Create for "docker-flags-984000" (driver="qemu2")
	I1205 11:50:01.523528   10215 client.go:168] LocalClient.Create starting
	I1205 11:50:01.523640   10215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:01.523709   10215 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:01.523726   10215 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:01.523789   10215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:01.523823   10215 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:01.523860   10215 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:01.524519   10215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:01.691495   10215 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:01.745338   10215 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:01.745343   10215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:01.745529   10215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:50:01.755614   10215 main.go:141] libmachine: STDOUT: 
	I1205 11:50:01.755640   10215 main.go:141] libmachine: STDERR: 
	I1205 11:50:01.755698   10215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2 +20000M
	I1205 11:50:01.764265   10215 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:01.764281   10215 main.go:141] libmachine: STDERR: 
	I1205 11:50:01.764294   10215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:50:01.764300   10215 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:01.764308   10215 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:01.764340   10215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:fa:80:31:aa:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000/disk.qcow2
	I1205 11:50:01.766152   10215 main.go:141] libmachine: STDOUT: 
	I1205 11:50:01.766167   10215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:01.766179   10215 client.go:171] duration metric: took 242.649125ms to LocalClient.Create
	I1205 11:50:03.768332   10215 start.go:128] duration metric: took 2.30058725s to createHost
	I1205 11:50:03.768437   10215 start.go:83] releasing machines lock for "docker-flags-984000", held for 2.301133083s
	W1205 11:50:03.768798   10215 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:03.782529   10215 out.go:201] 
	W1205 11:50:03.786670   10215 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:50:03.786720   10215 out.go:270] * 
	* 
	W1205 11:50:03.789584   10215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:50:03.800420   10215 out.go:201] 

** /stderr **
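Note that the disk preparation in the stderr above succeeds on both attempts; only the socket_vmnet connect fails. The two qemu-img steps, a raw-to-qcow2 convert of the boot2docker seed image followed by a +20000M sparse resize, can be replayed by hand to rule the disk out. A sketch only: MACHINE is a stand-in for the machine directory shown in the log, and the qemu-img info call is an extra verification step that minikube itself does not run:

$ MACHINE=/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/docker-flags-984000
$ qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
$ qemu-img resize "$MACHINE/disk.qcow2" +20000M
$ qemu-img info "$MACHINE/disk.qcow2"   # virtual size should have grown by 20000M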
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-984000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-984000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-984000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.304625ms)

-- stdout --
	* The control-plane node docker-flags-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-984000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-984000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-984000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-984000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-984000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-984000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-984000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-984000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.810375ms)

-- stdout --
	* The control-plane node docker-flags-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-984000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-984000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-984000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-984000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-984000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-05 11:50:03.943204 -0800 PST m=+1363.922932376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-984000 -n docker-flags-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-984000 -n docker-flags-984000: exit status 7 (33.998667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-984000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-984000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-984000
--- FAIL: TestDockerFlags (10.11s)
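Every qemu2 start in this run fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 never launches and each profile ends up Stopped. A quick daemon health check on the build host, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (the launchd service name and the brew services invocation below are assumptions, not taken from this log):

$ ls -l /var/run/socket_vmnet                  # listening socket should exist
$ sudo launchctl list | grep -i socket_vmnet   # is the daemon's launchd service loaded?
$ HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet   # (re)start the daemon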

TestForceSystemdFlag (10.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-774000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-774000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.889759041s)

-- stdout --
	* [force-systemd-flag-774000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-774000" primary control-plane node in "force-systemd-flag-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:49:24.652979   10074 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:49:24.653128   10074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:24.653130   10074 out.go:358] Setting ErrFile to fd 2...
	I1205 11:49:24.653133   10074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:24.653270   10074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:49:24.654472   10074 out.go:352] Setting JSON to false
	I1205 11:49:24.672352   10074 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6533,"bootTime":1733421631,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:49:24.672428   10074 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:49:24.676569   10074 out.go:177] * [force-systemd-flag-774000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:49:24.682430   10074 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:49:24.682469   10074 notify.go:220] Checking for updates...
	I1205 11:49:24.689547   10074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:49:24.692704   10074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:49:24.695525   10074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:49:24.698640   10074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:49:24.699612   10074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:49:24.702896   10074 config.go:182] Loaded profile config "NoKubernetes-344000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1205 11:49:24.702974   10074 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:49:24.703021   10074 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:49:24.706572   10074 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:49:24.711545   10074 start.go:297] selected driver: qemu2
	I1205 11:49:24.711555   10074 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:49:24.711562   10074 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:49:24.714125   10074 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:49:24.717519   10074 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:49:24.720638   10074 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:49:24.720661   10074 cni.go:84] Creating CNI manager for ""
	I1205 11:49:24.720689   10074 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:49:24.720695   10074 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:49:24.720742   10074 start.go:340] cluster config:
	{Name:force-systemd-flag-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:49:24.725193   10074 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:49:24.732540   10074 out.go:177] * Starting "force-systemd-flag-774000" primary control-plane node in "force-systemd-flag-774000" cluster
	I1205 11:49:24.736504   10074 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:49:24.736526   10074 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:49:24.736558   10074 cache.go:56] Caching tarball of preloaded images
	I1205 11:49:24.736649   10074 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:49:24.736655   10074 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:49:24.736729   10074 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/force-systemd-flag-774000/config.json ...
	I1205 11:49:24.736740   10074 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/force-systemd-flag-774000/config.json: {Name:mk9cb6a689e111eeda95dbe25ee1d57fe42599e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:49:24.737061   10074 start.go:360] acquireMachinesLock for force-systemd-flag-774000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:24.737113   10074 start.go:364] duration metric: took 44.125µs to acquireMachinesLock for "force-systemd-flag-774000"
	I1205 11:49:24.737125   10074 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:24.737172   10074 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:24.741642   10074 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:24.758775   10074 start.go:159] libmachine.API.Create for "force-systemd-flag-774000" (driver="qemu2")
	I1205 11:49:24.758802   10074 client.go:168] LocalClient.Create starting
	I1205 11:49:24.758875   10074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:24.758915   10074 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:24.758925   10074 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:24.758965   10074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:24.758993   10074 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:24.759001   10074 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:24.759355   10074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:24.915321   10074 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:25.037674   10074 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:25.037679   10074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:25.037863   10074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:25.047891   10074 main.go:141] libmachine: STDOUT: 
	I1205 11:49:25.047911   10074 main.go:141] libmachine: STDERR: 
	I1205 11:49:25.047967   10074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2 +20000M
	I1205 11:49:25.056638   10074 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:25.056662   10074 main.go:141] libmachine: STDERR: 
	I1205 11:49:25.056680   10074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:25.056685   10074 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:25.056697   10074 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:25.056725   10074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:97:95:8f:10:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:25.058575   10074 main.go:141] libmachine: STDOUT: 
	I1205 11:49:25.058596   10074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:25.058621   10074 client.go:171] duration metric: took 299.816292ms to LocalClient.Create
	I1205 11:49:27.060761   10074 start.go:128] duration metric: took 2.323594166s to createHost
	I1205 11:49:27.060814   10074 start.go:83] releasing machines lock for "force-systemd-flag-774000", held for 2.323711917s
	W1205 11:49:27.060875   10074 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:27.086618   10074 out.go:177] * Deleting "force-systemd-flag-774000" in qemu2 ...
	W1205 11:49:27.138931   10074 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:27.138975   10074 start.go:729] Will try again in 5 seconds ...
	I1205 11:49:32.139268   10074 start.go:360] acquireMachinesLock for force-systemd-flag-774000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:32.139898   10074 start.go:364] duration metric: took 473.75µs to acquireMachinesLock for "force-systemd-flag-774000"
	I1205 11:49:32.140095   10074 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:32.140516   10074 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:32.146255   10074 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:32.195931   10074 start.go:159] libmachine.API.Create for "force-systemd-flag-774000" (driver="qemu2")
	I1205 11:49:32.195994   10074 client.go:168] LocalClient.Create starting
	I1205 11:49:32.196140   10074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:32.196213   10074 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:32.196233   10074 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:32.196303   10074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:32.196363   10074 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:32.196377   10074 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:32.197043   10074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:32.370771   10074 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:32.437955   10074 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:32.437961   10074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:32.438160   10074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:32.448557   10074 main.go:141] libmachine: STDOUT: 
	I1205 11:49:32.448576   10074 main.go:141] libmachine: STDERR: 
	I1205 11:49:32.448635   10074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2 +20000M
	I1205 11:49:32.457228   10074 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:32.457243   10074 main.go:141] libmachine: STDERR: 
	I1205 11:49:32.457257   10074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:32.457264   10074 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:32.457273   10074 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:32.457309   10074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:05:70:c0:8f:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-flag-774000/disk.qcow2
	I1205 11:49:32.459130   10074 main.go:141] libmachine: STDOUT: 
	I1205 11:49:32.459146   10074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:32.459158   10074 client.go:171] duration metric: took 263.16075ms to LocalClient.Create
	I1205 11:49:34.461326   10074 start.go:128] duration metric: took 2.320791041s to createHost
	I1205 11:49:34.461388   10074 start.go:83] releasing machines lock for "force-systemd-flag-774000", held for 2.321457625s
	W1205 11:49:34.461871   10074 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:34.477581   10074 out.go:201] 
	W1205 11:49:34.480381   10074 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:49:34.480414   10074 out.go:270] * 
	* 
	W1205 11:49:34.483025   10074 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:49:34.490525   10074 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-774000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-774000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-774000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (88.568333ms)

-- stdout --
	* The control-plane node force-systemd-flag-774000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-774000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-774000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-05 11:49:34.60049 -0800 PST m=+1334.579952001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-774000 -n force-systemd-flag-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-774000 -n force-systemd-flag-774000: exit status 7 (42.06775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-774000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-774000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-774000
--- FAIL: TestForceSystemdFlag (10.11s)
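Same root cause and same retry shape as TestDockerFlags: after the first "Connection refused" minikube deletes the half-created profile, waits 5 seconds, retries createHost once, then exits with GUEST_PROVISION (exit status 80); the later ssh and status calls return 83 and 7 only because no host ever started. The client can also be exercised on its own to separate a daemon problem from a qemu problem; this sketch reuses the client and socket paths from the log, with echo standing in as an arbitrary child command:

$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If the daemon is down, this prints the same Failed to connect error; if it is up, the child runs with the vmnet socket passed on fd 3, which is why the qemu command lines above use -netdev socket,id=net0,fd=3.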

TestForceSystemdEnv (10.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-434000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1205 11:49:43.614272    7922 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/001/docker-machine-driver-hyperkit]
I1205 11:49:43.628801    7922 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/001/docker-machine-driver-hyperkit]
I1205 11:49:43.651329    7922 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:49:43.651468    7922 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1205 11:49:45.414204    7922 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1205 11:49:45.414230    7922 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1205 11:49:45.414279    7922 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1205 11:49:45.414317    7922 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit
I1205 11:49:45.807136    7922 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0] Decompressors:map[bz2:0x1400081a7c8 gz:0x1400081a980 tar:0x1400081a890 tar.bz2:0x1400081a900 tar.gz:0x1400081a930 tar.xz:0x1400081a940 tar.zst:0x1400081a950 tbz2:0x1400081a900 tgz:0x1400081a930 txz:0x1400081a940 tzst:0x1400081a950 xz:0x1400081a988 zip:0x1400081a9e0 zst:0x1400081a9f0] Getters:map[file:0x14001434e20 http:0x140006dc3c0 https:0x140006dc410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 11:49:45.807269    7922 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit
I1205 11:49:48.763826    7922 install.go:79] stdout: 
W1205 11:49:48.764004    7922 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit 

I1205 11:49:48.764028    7922 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit]
I1205 11:49:48.780785    7922 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit]
I1205 11:49:48.793515    7922 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit]
I1205 11:49:48.804011    7922 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-434000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.141582917s)

-- stdout --
	* [force-systemd-env-434000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-434000" primary control-plane node in "force-systemd-env-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:49:43.635550   10166 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:49:43.635667   10166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:43.635670   10166 out.go:358] Setting ErrFile to fd 2...
	I1205 11:49:43.635673   10166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:43.635807   10166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:49:43.636924   10166 out.go:352] Setting JSON to false
	I1205 11:49:43.656187   10166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6552,"bootTime":1733421631,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:49:43.656269   10166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:49:43.662203   10166 out.go:177] * [force-systemd-env-434000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:49:43.669070   10166 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:49:43.669113   10166 notify.go:220] Checking for updates...
	I1205 11:49:43.677144   10166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:49:43.680039   10166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:49:43.683154   10166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:49:43.686143   10166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:49:43.689100   10166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1205 11:49:43.692501   10166 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:49:43.692549   10166 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:49:43.697122   10166 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:49:43.704100   10166 start.go:297] selected driver: qemu2
	I1205 11:49:43.704109   10166 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:49:43.704116   10166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:49:43.706830   10166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:49:43.710080   10166 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:49:43.713144   10166 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:49:43.713162   10166 cni.go:84] Creating CNI manager for ""
	I1205 11:49:43.713184   10166 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:49:43.713188   10166 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:49:43.713224   10166 start.go:340] cluster config:
	{Name:force-systemd-env-434000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:49:43.718404   10166 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:49:43.725149   10166 out.go:177] * Starting "force-systemd-env-434000" primary control-plane node in "force-systemd-env-434000" cluster
	I1205 11:49:43.729070   10166 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:49:43.729097   10166 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:49:43.729108   10166 cache.go:56] Caching tarball of preloaded images
	I1205 11:49:43.729211   10166 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:49:43.729219   10166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:49:43.729282   10166 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/force-systemd-env-434000/config.json ...
	I1205 11:49:43.729294   10166 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/force-systemd-env-434000/config.json: {Name:mk49410d44758ef6c098a8d38844fae57e4c06b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:49:43.729678   10166 start.go:360] acquireMachinesLock for force-systemd-env-434000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:43.729733   10166 start.go:364] duration metric: took 46.333µs to acquireMachinesLock for "force-systemd-env-434000"
	I1205 11:49:43.729747   10166 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:43.729777   10166 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:43.738078   10166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:43.755943   10166 start.go:159] libmachine.API.Create for "force-systemd-env-434000" (driver="qemu2")
	I1205 11:49:43.755974   10166 client.go:168] LocalClient.Create starting
	I1205 11:49:43.756051   10166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:43.756088   10166 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:43.756102   10166 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:43.756141   10166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:43.756172   10166 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:43.756183   10166 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:43.756589   10166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:43.907560   10166 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:44.151445   10166 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:44.151455   10166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:44.151709   10166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:44.162175   10166 main.go:141] libmachine: STDOUT: 
	I1205 11:49:44.162199   10166 main.go:141] libmachine: STDERR: 
	I1205 11:49:44.162256   10166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2 +20000M
	I1205 11:49:44.171252   10166 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:44.171267   10166 main.go:141] libmachine: STDERR: 
	I1205 11:49:44.171292   10166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:44.171299   10166 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:44.171310   10166 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:44.171336   10166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:66:0a:2d:9b:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:44.173194   10166 main.go:141] libmachine: STDOUT: 
	I1205 11:49:44.173207   10166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:44.173227   10166 client.go:171] duration metric: took 417.250917ms to LocalClient.Create
	I1205 11:49:46.175433   10166 start.go:128] duration metric: took 2.445645875s to createHost
	I1205 11:49:46.175499   10166 start.go:83] releasing machines lock for "force-systemd-env-434000", held for 2.445777042s
	W1205 11:49:46.175537   10166 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:46.192621   10166 out.go:177] * Deleting "force-systemd-env-434000" in qemu2 ...
	W1205 11:49:46.220256   10166 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:46.220340   10166 start.go:729] Will try again in 5 seconds ...
	I1205 11:49:51.222565   10166 start.go:360] acquireMachinesLock for force-systemd-env-434000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:51.342722   10166 start.go:364] duration metric: took 120.052834ms to acquireMachinesLock for "force-systemd-env-434000"
	I1205 11:49:51.343342   10166 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:51.343537   10166 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:51.358210   10166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:51.410608   10166 start.go:159] libmachine.API.Create for "force-systemd-env-434000" (driver="qemu2")
	I1205 11:49:51.410655   10166 client.go:168] LocalClient.Create starting
	I1205 11:49:51.410808   10166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:51.410893   10166 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:51.410912   10166 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:51.410974   10166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:51.411030   10166 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:51.411042   10166 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:51.411670   10166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:51.579888   10166 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:51.671534   10166 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:51.671539   10166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:51.671748   10166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:51.682157   10166 main.go:141] libmachine: STDOUT: 
	I1205 11:49:51.682178   10166 main.go:141] libmachine: STDERR: 
	I1205 11:49:51.682235   10166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2 +20000M
	I1205 11:49:51.690768   10166 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:51.690783   10166 main.go:141] libmachine: STDERR: 
	I1205 11:49:51.690794   10166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:51.690800   10166 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:51.690808   10166 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:51.690840   10166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:42:7e:ff:1f:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/force-systemd-env-434000/disk.qcow2
	I1205 11:49:51.692643   10166 main.go:141] libmachine: STDOUT: 
	I1205 11:49:51.692657   10166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:51.692668   10166 client.go:171] duration metric: took 282.011167ms to LocalClient.Create
	I1205 11:49:53.694930   10166 start.go:128] duration metric: took 2.351352292s to createHost
	I1205 11:49:53.695002   10166 start.go:83] releasing machines lock for "force-systemd-env-434000", held for 2.352252458s
	W1205 11:49:53.695368   10166 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:53.708207   10166 out.go:201] 
	W1205 11:49:53.715166   10166 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:49:53.715195   10166 out.go:270] * 
	* 
	W1205 11:49:53.717813   10166 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:49:53.727985   10166 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-434000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-434000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-434000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.530208ms)

-- stdout --
	* The control-plane node force-systemd-env-434000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-434000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-434000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-05 11:49:53.835964 -0800 PST m=+1353.815601168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-434000 -n force-systemd-env-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-434000 -n force-systemd-env-434000: exit status 7 (35.408834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-434000
--- FAIL: TestForceSystemdEnv (10.35s)
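
Note: every failure in this report reduces to the same host-side condition: nothing was listening on /var/run/socket_vmnet when socket_vmnet_client tried to hand QEMU its network file descriptor. A minimal diagnostic sketch for the build host follows; the service name and launch flags come from the socket_vmnet README and minikube's qemu2 driver docs, not from this run:

    # Is the daemon alive and the socket present?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Homebrew installs run it as a root service (brew invoked by full path, as the minikube docs do):
    sudo "$(which brew)" services restart socket_vmnet
    # Or run it in the foreground to watch for startup errors; the gateway
    # address is the conventional default, assumed here:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet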

TestErrorSpam/setup (9.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-444000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-444000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 --driver=qemu2 : exit status 80 (9.9474105s)

-- stdout --
	* [nospam-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-444000" primary control-plane node in "nospam-444000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-444000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-444000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-444000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20053
- KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-444000" primary control-plane node in "nospam-444000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-444000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.95s)
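
Note: the missing kubeadm sub-steps ("Generating certificates and keys ...", and so on) are a downstream symptom: the VM never booted, so kubeadm never ran. The refusal can be reproduced without minikube by probing the unix socket directly; a sketch using BSD netcat's unix-socket mode:

    nc -U /var/run/socket_vmnet < /dev/null \
      && echo "socket_vmnet reachable" \
      || echo "connection refused, as in the logs above"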

TestFunctional/serial/StartWithProxy (9.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-234000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.86522925s)

-- stdout --
	* [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-234000" primary control-plane node in "functional-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-234000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20053
- KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-234000" primary control-plane node in "functional-234000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-234000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:56312 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (74.952875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.94s)
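
Note: the "Local proxy ignored" warnings are expected behavior rather than part of the failure: minikube drops a localhost proxy because "localhost" inside the guest does not reach the host. For this test's proxy assertions to be exercised, the proxy must listen on an address routable from the VM; the address below is a placeholder, not taken from this run:

    HTTP_PROXY=http://192.168.105.1:56312 HTTPS_PROXY=http://192.168.105.1:56312 \
      out/minikube-darwin-arm64 start -p functional-234000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2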

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1205 11:28:27.561365    7922 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-234000 --alsologtostderr -v=8: exit status 80 (5.192259292s)

-- stdout --
	* [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-234000" primary control-plane node in "functional-234000" cluster
	* Restarting existing qemu2 VM for "functional-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:28:27.594692    8149 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:28:27.594852    8149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:28:27.594855    8149 out.go:358] Setting ErrFile to fd 2...
	I1205 11:28:27.594858    8149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:28:27.594991    8149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:28:27.596056    8149 out.go:352] Setting JSON to false
	I1205 11:28:27.613598    8149 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5276,"bootTime":1733421631,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:28:27.613676    8149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:28:27.618694    8149 out.go:177] * [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:28:27.626824    8149 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:28:27.626887    8149 notify.go:220] Checking for updates...
	I1205 11:28:27.633766    8149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:28:27.636758    8149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:28:27.639679    8149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:28:27.642754    8149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:28:27.645806    8149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:28:27.649007    8149 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:28:27.649068    8149 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:28:27.653734    8149 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:28:27.659722    8149 start.go:297] selected driver: qemu2
	I1205 11:28:27.659729    8149 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:28:27.659771    8149 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:28:27.662292    8149 cni.go:84] Creating CNI manager for ""
	I1205 11:28:27.662332    8149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:28:27.662388    8149 start.go:340] cluster config:
	{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:28:27.666917    8149 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:28:27.674780    8149 out.go:177] * Starting "functional-234000" primary control-plane node in "functional-234000" cluster
	I1205 11:28:27.678741    8149 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:28:27.678757    8149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:28:27.678772    8149 cache.go:56] Caching tarball of preloaded images
	I1205 11:28:27.678849    8149 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:28:27.678855    8149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:28:27.678914    8149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/functional-234000/config.json ...
	I1205 11:28:27.679366    8149 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:28:27.679396    8149 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "functional-234000"
	I1205 11:28:27.679405    8149 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:28:27.679410    8149 fix.go:54] fixHost starting: 
	I1205 11:28:27.679542    8149 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
	W1205 11:28:27.679549    8149 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:28:27.686819    8149 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
	I1205 11:28:27.690769    8149 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:28:27.690821    8149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
	I1205 11:28:27.693128    8149 main.go:141] libmachine: STDOUT: 
	I1205 11:28:27.693148    8149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:28:27.693179    8149 fix.go:56] duration metric: took 13.768417ms for fixHost
	I1205 11:28:27.693183    8149 start.go:83] releasing machines lock for "functional-234000", held for 13.78275ms
	W1205 11:28:27.693189    8149 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:28:27.693236    8149 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:28:27.693241    8149 start.go:729] Will try again in 5 seconds ...
	I1205 11:28:32.695410    8149 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:28:32.695847    8149 start.go:364] duration metric: took 351.292µs to acquireMachinesLock for "functional-234000"
	I1205 11:28:32.695959    8149 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:28:32.695977    8149 fix.go:54] fixHost starting: 
	I1205 11:28:32.696696    8149 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
	W1205 11:28:32.696723    8149 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:28:32.703940    8149 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
	I1205 11:28:32.708045    8149 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:28:32.708293    8149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
	I1205 11:28:32.718203    8149 main.go:141] libmachine: STDOUT: 
	I1205 11:28:32.718294    8149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:28:32.718373    8149 fix.go:56] duration metric: took 22.392125ms for fixHost
	I1205 11:28:32.718392    8149 start.go:83] releasing machines lock for "functional-234000", held for 22.522916ms
	W1205 11:28:32.718608    8149 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:28:32.726036    8149 out.go:201] 
	W1205 11:28:32.730201    8149 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:28:32.730227    8149 out.go:270] * 
	* 
	W1205 11:28:32.733184    8149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:28:32.740110    8149 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-234000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.193906625s for "functional-234000" cluster.
I1205 11:28:32.755532    7922 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (73.847125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
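
Note: the recovery path the log itself suggests is to discard the stale profile and recreate it once the socket is reachable; as a sketch:

    out/minikube-darwin-arm64 delete -p functional-234000
    out/minikube-darwin-arm64 start -p functional-234000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2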

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.104292ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-234000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (35.429625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.07s)
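
Note: an unset current-context simply means the failed start never wrote a cluster entry into the kubeconfig. Standard kubectl commands confirm that from the host; a sketch using the KUBECONFIG path from this report:

    KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig kubectl config get-contexts
    # Prints only the header row here; a "functional-234000" context appears
    # only after a successful "minikube start -p functional-234000".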

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-234000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-234000 get po -A: exit status 1 (26.599542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-234000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-234000\n"*: args "kubectl --context functional-234000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-234000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (35.162917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl images: exit status 83 (45.9495ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (45.841292ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-234000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.777583ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.909083ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-234000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 kubectl -- --context functional-234000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 kubectl -- --context functional-234000 get pods: exit status 1 (711.970167ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-234000
	* no server found for cluster "functional-234000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-234000 kubectl -- --context functional-234000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (35.166291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-234000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-234000 get pods: exit status 1 (1.171918833s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-234000
	* no server found for cluster "functional-234000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-234000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (33.343375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-234000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.191177375s)

-- stdout --
	* [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-234000" primary control-plane node in "functional-234000" cluster
	* Restarting existing qemu2 VM for "functional-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-234000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.192178417s for "functional-234000" cluster.
I1205 11:28:43.579469    7922 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (71.647875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
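[Editor's note: this failure, like nearly every start failure in the report, bottoms out in the same host-side fault visible in the stderr above: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not running on the build host. A minimal Go sketch that reproduces the driver's failing dial follows; it is a hypothetical probe written under that assumption, not minikube code.]

	// vmnetprobe.go - hypothetical probe, not part of minikube or this suite.
	// Attempts the same unix-socket connection the qemu2 driver makes via
	// socket_vmnet_client; on this host it should fail with ECONNREFUSED.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the log line:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

[If the probe fails, restarting the socket_vmnet service on the host, however it is managed there, is the precondition these tests are waiting on.]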

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-234000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-234000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.015667ms)

** stderr ** 
	error: context "functional-234000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-234000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (34.312875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 logs: exit status 83 (80.547625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | -p download-only-019000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| delete  | -p download-only-019000                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| start   | -o=json --download-only                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | -p download-only-727000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| delete  | -p download-only-727000                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| delete  | -p download-only-019000                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| delete  | -p download-only-727000                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| start   | --download-only -p                                                       | binary-mirror-263000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | binary-mirror-263000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:56275                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-263000                                                  | binary-mirror-263000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| addons  | enable dashboard -p                                                      | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | addons-656000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | addons-656000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-656000 --wait=true                                             | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-656000                                                         | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| start   | -p nospam-444000 -n=1 --memory=2250 --wait=false                         | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-444000                                                         | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | minikube-local-cache-test:functional-234000                              |                      |         |         |                     |                     |
	| cache   | functional-234000 cache delete                                           | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | minikube-local-cache-test:functional-234000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	| ssh     | functional-234000 ssh sudo                                               | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-234000                                                        | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-234000 ssh                                                    | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-234000 cache reload                                           | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	| ssh     | functional-234000 ssh                                                    | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-234000 kubectl --                                             | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | --context functional-234000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 11:28:38
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 11:28:38.417508    8224 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:28:38.417652    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:28:38.417654    8224 out.go:358] Setting ErrFile to fd 2...
	I1205 11:28:38.417655    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:28:38.417759    8224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:28:38.418813    8224 out.go:352] Setting JSON to false
	I1205 11:28:38.436555    8224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5287,"bootTime":1733421631,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:28:38.436628    8224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:28:38.442734    8224 out.go:177] * [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:28:38.450901    8224 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:28:38.450938    8224 notify.go:220] Checking for updates...
	I1205 11:28:38.458807    8224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:28:38.461876    8224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:28:38.464856    8224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:28:38.467828    8224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:28:38.470937    8224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:28:38.474073    8224 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:28:38.474134    8224 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:28:38.478875    8224 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:28:38.485873    8224 start.go:297] selected driver: qemu2
	I1205 11:28:38.485877    8224 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:28:38.485922    8224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:28:38.488448    8224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:28:38.488469    8224 cni.go:84] Creating CNI manager for ""
	I1205 11:28:38.488497    8224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:28:38.488551    8224 start.go:340] cluster config:
	{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:28:38.493035    8224 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:28:38.499836    8224 out.go:177] * Starting "functional-234000" primary control-plane node in "functional-234000" cluster
	I1205 11:28:38.503861    8224 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:28:38.503874    8224 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:28:38.503887    8224 cache.go:56] Caching tarball of preloaded images
	I1205 11:28:38.503963    8224 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:28:38.503975    8224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:28:38.504022    8224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/functional-234000/config.json ...
	I1205 11:28:38.504436    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:28:38.504484    8224 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "functional-234000"
	I1205 11:28:38.504491    8224 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:28:38.504494    8224 fix.go:54] fixHost starting: 
	I1205 11:28:38.504615    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
	W1205 11:28:38.504620    8224 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:28:38.507759    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
	I1205 11:28:38.515844    8224 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:28:38.515887    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
	I1205 11:28:38.518301    8224 main.go:141] libmachine: STDOUT: 
	I1205 11:28:38.518314    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:28:38.518346    8224 fix.go:56] duration metric: took 13.851541ms for fixHost
	I1205 11:28:38.518350    8224 start.go:83] releasing machines lock for "functional-234000", held for 13.862917ms
	W1205 11:28:38.518355    8224 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:28:38.518392    8224 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:28:38.518397    8224 start.go:729] Will try again in 5 seconds ...
	I1205 11:28:43.520674    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:28:43.521124    8224 start.go:364] duration metric: took 378.833µs to acquireMachinesLock for "functional-234000"
	I1205 11:28:43.521263    8224 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:28:43.521276    8224 fix.go:54] fixHost starting: 
	I1205 11:28:43.521968    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
	W1205 11:28:43.521990    8224 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:28:43.525573    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
	I1205 11:28:43.532483    8224 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:28:43.532716    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
	I1205 11:28:43.543257    8224 main.go:141] libmachine: STDOUT: 
	I1205 11:28:43.543338    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:28:43.543441    8224 fix.go:56] duration metric: took 22.166709ms for fixHost
	I1205 11:28:43.543458    8224 start.go:83] releasing machines lock for "functional-234000", held for 22.31625ms
	W1205 11:28:43.543626    8224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:28:43.550500    8224 out.go:201] 
	W1205 11:28:43.554672    8224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:28:43.554695    8224 out.go:270] * 
	W1205 11:28:43.557552    8224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:28:43.564471    8224 out.go:201] 
	
	
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-234000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | -p download-only-019000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| delete  | -p download-only-019000                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| start   | -o=json --download-only                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | -p download-only-727000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| delete  | -p download-only-727000                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| delete  | -p download-only-019000                                                  | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| delete  | -p download-only-727000                                                  | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| start   | --download-only -p                                                       | binary-mirror-263000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | binary-mirror-263000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:56275                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-263000                                                  | binary-mirror-263000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| addons  | enable dashboard -p                                                      | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | addons-656000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | addons-656000                                                            |                      |         |         |                     |                     |
| start   | -p addons-656000 --wait=true                                             | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-656000                                                         | addons-656000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
| start   | -p nospam-444000 -n=1 --memory=2250 --wait=false                         | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-444000 --log_dir                                                  | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-444000                                                         | nospam-444000        | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-234000 cache add                                              | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | minikube-local-cache-test:functional-234000                              |                      |         |         |                     |                     |
| cache   | functional-234000 cache delete                                           | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | minikube-local-cache-test:functional-234000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
| ssh     | functional-234000 ssh sudo                                               | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-234000                                                        | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-234000 ssh                                                    | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-234000 cache reload                                           | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
| ssh     | functional-234000 ssh                                                    | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:28 PST | 05 Dec 24 11:28 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-234000 kubectl --                                             | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | --context functional-234000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-234000                                                     | functional-234000    | jenkins | v1.34.0 | 05 Dec 24 11:28 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/05 11:28:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 11:28:38.417508    8224 out.go:345] Setting OutFile to fd 1 ...
I1205 11:28:38.417652    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:38.417654    8224 out.go:358] Setting ErrFile to fd 2...
I1205 11:28:38.417655    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:38.417759    8224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:28:38.418813    8224 out.go:352] Setting JSON to false
I1205 11:28:38.436555    8224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5287,"bootTime":1733421631,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W1205 11:28:38.436628    8224 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1205 11:28:38.442734    8224 out.go:177] * [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1205 11:28:38.450901    8224 out.go:177]   - MINIKUBE_LOCATION=20053
I1205 11:28:38.450938    8224 notify.go:220] Checking for updates...
I1205 11:28:38.458807    8224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
I1205 11:28:38.461876    8224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1205 11:28:38.464856    8224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 11:28:38.467828    8224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
I1205 11:28:38.470937    8224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1205 11:28:38.474073    8224 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:28:38.474134    8224 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 11:28:38.478875    8224 out.go:177] * Using the qemu2 driver based on existing profile
I1205 11:28:38.485873    8224 start.go:297] selected driver: qemu2
I1205 11:28:38.485877    8224 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 11:28:38.485922    8224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 11:28:38.488448    8224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 11:28:38.488469    8224 cni.go:84] Creating CNI manager for ""
I1205 11:28:38.488497    8224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 11:28:38.488551    8224 start.go:340] cluster config:
{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 11:28:38.493035    8224 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:28:38.499836    8224 out.go:177] * Starting "functional-234000" primary control-plane node in "functional-234000" cluster
I1205 11:28:38.503861    8224 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 11:28:38.503874    8224 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1205 11:28:38.503887    8224 cache.go:56] Caching tarball of preloaded images
I1205 11:28:38.503963    8224 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1205 11:28:38.503975    8224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1205 11:28:38.504022    8224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/functional-234000/config.json ...
I1205 11:28:38.504436    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 11:28:38.504484    8224 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "functional-234000"
I1205 11:28:38.504491    8224 start.go:96] Skipping create...Using existing machine configuration
I1205 11:28:38.504494    8224 fix.go:54] fixHost starting: 
I1205 11:28:38.504615    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
W1205 11:28:38.504620    8224 fix.go:138] unexpected machine state, will restart: <nil>
I1205 11:28:38.507759    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
I1205 11:28:38.515844    8224 qemu.go:418] Using hvf for hardware acceleration
I1205 11:28:38.515887    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
I1205 11:28:38.518301    8224 main.go:141] libmachine: STDOUT: 
I1205 11:28:38.518314    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 11:28:38.518346    8224 fix.go:56] duration metric: took 13.851541ms for fixHost
I1205 11:28:38.518350    8224 start.go:83] releasing machines lock for "functional-234000", held for 13.862917ms
W1205 11:28:38.518355    8224 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 11:28:38.518392    8224 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 11:28:38.518397    8224 start.go:729] Will try again in 5 seconds ...
I1205 11:28:43.520674    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 11:28:43.521124    8224 start.go:364] duration metric: took 378.833µs to acquireMachinesLock for "functional-234000"
I1205 11:28:43.521263    8224 start.go:96] Skipping create...Using existing machine configuration
I1205 11:28:43.521276    8224 fix.go:54] fixHost starting: 
I1205 11:28:43.521968    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
W1205 11:28:43.521990    8224 fix.go:138] unexpected machine state, will restart: <nil>
I1205 11:28:43.525573    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
I1205 11:28:43.532483    8224 qemu.go:418] Using hvf for hardware acceleration
I1205 11:28:43.532716    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
I1205 11:28:43.543257    8224 main.go:141] libmachine: STDOUT: 
I1205 11:28:43.543338    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 11:28:43.543441    8224 fix.go:56] duration metric: took 22.166709ms for fixHost
I1205 11:28:43.543458    8224 start.go:83] releasing machines lock for "functional-234000", held for 22.31625ms
W1205 11:28:43.543626    8224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 11:28:43.550500    8224 out.go:201] 
W1205 11:28:43.554672    8224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 11:28:43.554695    8224 out.go:270] * 
W1205 11:28:43.557552    8224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 11:28:43.564471    8224 out.go:201] 

* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
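Every failed start in the log above dies at the same point: socket_vmnet_client is refused a connection on /var/run/socket_vmnet, so the qemu2 VM never gets its network and each dependent test fails within seconds. A minimal triage sketch for the agent, assuming the Homebrew socket_vmnet layout that the log itself reports (the service commands below are the stock formula ones, not something this run verified):

    # Does the daemon's unix socket exist on the agent?
    ls -l /var/run/socket_vmnet
    # Is the launchd service loaded in the system domain?
    sudo launchctl list | grep -i socket_vmnet
    # If not, (re)start it the way the minikube qemu2 driver docs set it up
    sudo brew services restart socket_vmnet

This also accounts for the logs assertions failing (the word "Linux", per the LogsFileCmd failure below): with the host in state=Stopped, "minikube logs" can only replay the local audit table and last-start records, and nothing in that output mentions the guest kernel.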

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1390791169/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit table shown above for TestFunctional/serial/LogsCmd)

==> Last Start <==
Log file created at: 2024/12/05 11:28:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 11:28:38.417508    8224 out.go:345] Setting OutFile to fd 1 ...
I1205 11:28:38.417652    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:38.417654    8224 out.go:358] Setting ErrFile to fd 2...
I1205 11:28:38.417655    8224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:38.417759    8224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:28:38.418813    8224 out.go:352] Setting JSON to false
I1205 11:28:38.436555    8224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5287,"bootTime":1733421631,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W1205 11:28:38.436628    8224 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1205 11:28:38.442734    8224 out.go:177] * [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1205 11:28:38.450901    8224 out.go:177]   - MINIKUBE_LOCATION=20053
I1205 11:28:38.450938    8224 notify.go:220] Checking for updates...
I1205 11:28:38.458807    8224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
I1205 11:28:38.461876    8224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1205 11:28:38.464856    8224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 11:28:38.467828    8224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
I1205 11:28:38.470937    8224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1205 11:28:38.474073    8224 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:28:38.474134    8224 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 11:28:38.478875    8224 out.go:177] * Using the qemu2 driver based on existing profile
I1205 11:28:38.485873    8224 start.go:297] selected driver: qemu2
I1205 11:28:38.485877    8224 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 11:28:38.485922    8224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 11:28:38.488448    8224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 11:28:38.488469    8224 cni.go:84] Creating CNI manager for ""
I1205 11:28:38.488497    8224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 11:28:38.488551    8224 start.go:340] cluster config:
{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 11:28:38.493035    8224 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:28:38.499836    8224 out.go:177] * Starting "functional-234000" primary control-plane node in "functional-234000" cluster
I1205 11:28:38.503861    8224 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 11:28:38.503874    8224 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1205 11:28:38.503887    8224 cache.go:56] Caching tarball of preloaded images
I1205 11:28:38.503963    8224 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1205 11:28:38.503975    8224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1205 11:28:38.504022    8224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/functional-234000/config.json ...
I1205 11:28:38.504436    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 11:28:38.504484    8224 start.go:364] duration metric: took 43.166µs to acquireMachinesLock for "functional-234000"
I1205 11:28:38.504491    8224 start.go:96] Skipping create...Using existing machine configuration
I1205 11:28:38.504494    8224 fix.go:54] fixHost starting: 
I1205 11:28:38.504615    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
W1205 11:28:38.504620    8224 fix.go:138] unexpected machine state, will restart: <nil>
I1205 11:28:38.507759    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
I1205 11:28:38.515844    8224 qemu.go:418] Using hvf for hardware acceleration
I1205 11:28:38.515887    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
I1205 11:28:38.518301    8224 main.go:141] libmachine: STDOUT: 
I1205 11:28:38.518314    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 11:28:38.518346    8224 fix.go:56] duration metric: took 13.851541ms for fixHost
I1205 11:28:38.518350    8224 start.go:83] releasing machines lock for "functional-234000", held for 13.862917ms
W1205 11:28:38.518355    8224 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 11:28:38.518392    8224 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 11:28:38.518397    8224 start.go:729] Will try again in 5 seconds ...
I1205 11:28:43.520674    8224 start.go:360] acquireMachinesLock for functional-234000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 11:28:43.521124    8224 start.go:364] duration metric: took 378.833µs to acquireMachinesLock for "functional-234000"
I1205 11:28:43.521263    8224 start.go:96] Skipping create...Using existing machine configuration
I1205 11:28:43.521276    8224 fix.go:54] fixHost starting: 
I1205 11:28:43.521968    8224 fix.go:112] recreateIfNeeded on functional-234000: state=Stopped err=<nil>
W1205 11:28:43.521990    8224 fix.go:138] unexpected machine state, will restart: <nil>
I1205 11:28:43.525573    8224 out.go:177] * Restarting existing qemu2 VM for "functional-234000" ...
I1205 11:28:43.532483    8224 qemu.go:418] Using hvf for hardware acceleration
I1205 11:28:43.532716    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:23:cf:73:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/functional-234000/disk.qcow2
I1205 11:28:43.543257    8224 main.go:141] libmachine: STDOUT: 
I1205 11:28:43.543338    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 11:28:43.543441    8224 fix.go:56] duration metric: took 22.166709ms for fixHost
I1205 11:28:43.543458    8224 start.go:83] releasing machines lock for "functional-234000", held for 22.31625ms
W1205 11:28:43.543626    8224 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 11:28:43.550500    8224 out.go:201] 
W1205 11:28:43.554672    8224 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 11:28:43.554695    8224 out.go:270] * 
W1205 11:28:43.557552    8224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 11:28:43.564471    8224 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
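Every start attempt in the "Last Start" log above dies at the same call: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never gets a network and the VM never boots; the failures below all cascade from this. A quick manual triage on the build host might look like the following sketch, assuming socket_vmnet was installed as a root Homebrew service in the way the minikube qemu driver docs describe:

    # Is the control socket present, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If not, restart the daemon (it runs as root so it can own the socket)
    sudo brew services restart socket_vmnet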

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-234000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-234000 apply -f testdata/invalidsvc.yaml: exit status 1 (29.090541ms)

** stderr ** 
	error: context "functional-234000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-234000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
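The kubectl failures here and below are secondary: since "minikube start" never succeeded, the expected functional-234000 context is absent from the kubeconfig at /Users/jenkins/minikube-integration/20053-7409/kubeconfig. Stock kubectl shows which contexts actually exist:

    kubectl config get-contexts
    kubectl config current-context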

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-234000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-234000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-234000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-234000 --alsologtostderr -v=1] stderr:
I1205 11:29:19.641956    8424 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:19.642374    8424 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:19.642377    8424 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:19.642380    8424 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:19.642553    8424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:19.642831    8424 mustload.go:65] Loading cluster: functional-234000
I1205 11:29:19.643055    8424 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:19.647543    8424 out.go:177] * The control-plane node functional-234000 host is not running: state=Stopped
I1205 11:29:19.651490    8424 out.go:177]   To start a cluster, run: "minikube start -p functional-234000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (46.268791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 status: exit status 7 (79.090584ms)

-- stdout --
	functional-234000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-234000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (36.500625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-234000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 status -o json: exit status 7 (34.407333ms)

-- stdout --
	{"Name":"functional-234000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-234000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (34.206625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.19s)
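Exit status 7 from "minikube status" is informative rather than a crash: as I read minikube's status command, the exit code is a bitmask of component states (1 = minikube not running, 2 = cluster not running, 4 = kubernetes not running), so a fully stopped profile yields 1|2|4 = 7, which is why the post-mortem helper prints "(may be ok)". For example:

    out/minikube-darwin-arm64 -p functional-234000 status -o json
    echo $?   # 7 here: host, cluster, and kubernetes are all down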

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-234000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-234000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.176417ms)

** stderr ** 
	error: context "functional-234000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-234000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-234000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-234000 describe po hello-node-connect: exit status 1 (27.0925ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:1604: "kubectl --context functional-234000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-234000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-234000 logs -l app=hello-node-connect: exit status 1 (26.906583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:1610: "kubectl --context functional-234000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-234000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-234000 describe svc hello-node-connect: exit status 1 (26.934375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:1616: "kubectl --context functional-234000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (34.646083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
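For reference, the steps this test automates can be reproduced by hand once a cluster is actually up; a sketch (port 8080 is the echoserver listen port the test assumes):

    kubectl --context functional-234000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-234000 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-darwin-arm64 -p functional-234000 service hello-node-connect --url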

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-234000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (37.236708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)
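This test never reaches the PVC itself; it fails on its precondition, waiting for the storage-provisioner pod (functional_test_pvc_test.go appears to wait on the integration-test=storage-provisioner label). On a live cluster the precondition could be checked directly:

    kubectl --context functional-234000 get storageclass
    kubectl --context functional-234000 -n kube-system get pods -l integration-test=storage-provisioner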

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "echo hello": exit status 83 (56.643291ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n"*. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "cat /etc/hostname": exit status 83 (47.954542ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-234000"- but got *"* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n"*. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (34.365458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (58.81075ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.223834ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-234000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-234000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cp functional-234000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2968092185/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 cp functional-234000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2968092185/001/cp-test.txt: exit status 83 (47.250416ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 cp functional-234000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2968092185/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.798208ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2968092185/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (51.846333ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (55.760708ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-234000 ssh -n functional-234000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-234000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-234000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.31s)
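Note the calling convention the test exercises: "minikube cp" takes a plain path for the host side and a <node>:<path> form for the in-VM side. On a running profile the round trip above reduces to:

    out/minikube-darwin-arm64 -p functional-234000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-234000 cp functional-234000:/home/docker/cp-test.txt ./cp-test.txt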

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7922/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/test/nested/copy/7922/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/test/nested/copy/7922/hosts": exit status 83 (49.384958ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/test/nested/copy/7922/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-234000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-234000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (33.74175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
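FileSync relies on minikube's file sync mechanism: files placed under $MINIKUBE_HOME/.minikube/files are copied into the guest at the mirrored absolute path on start, which is how the test seeds /etc/test/nested/copy/7922/hosts. A sketch using this run's MINIKUBE_HOME:

    mkdir -p /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/test/nested/copy/7922
    cp /etc/hosts /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/test/nested/copy/7922/hosts
    # after a successful start, the file should be visible inside the VM:
    out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/test/nested/copy/7922/hosts"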

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7922.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/7922.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/7922.pem": exit status 83 (44.789291ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7922.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /etc/ssl/certs/7922.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7922.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7922.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /usr/share/ca-certificates/7922.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /usr/share/ca-certificates/7922.pem": exit status 83 (45.595583ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7922.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /usr/share/ca-certificates/7922.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7922.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.602959ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/79222.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/79222.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/79222.pem": exit status 83 (44.545834ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/79222.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /etc/ssl/certs/79222.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/79222.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/79222.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /usr/share/ca-certificates/79222.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /usr/share/ca-certificates/79222.pem": exit status 83 (45.599625ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/79222.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /usr/share/ca-certificates/79222.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/79222.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.768708ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-234000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-234000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (33.336625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
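
For reference, the CertSync assertion quoted above (functional_test.go:1999-2008) reduces to running `minikube ssh "sudo cat <path>"` and comparing the result byte-for-byte against the local PEM. A minimal Go sketch of that shape — profile name and paths are taken from this run, everything else is an assumption, not minikube's actual test code:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "functional-234000" // profile from this run
	vmPath := "/usr/share/ca-certificates/79222.pem"

	want, err := os.ReadFile("minikube_test2.pem") // assumed local copy of the cert
	if err != nil {
		fmt.Fprintln(os.Stderr, "read local pem:", err)
		os.Exit(1)
	}

	// With the host stopped, `minikube ssh` exits 83 and prints the
	// "state=Stopped" advice instead of the file, as recorded above.
	got, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"ssh", "sudo cat "+vmPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh failed:", err)
		os.Exit(1)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Fprintf(os.Stderr, "%s mismatch inside VM\n", vmPath)
		os.Exit(1)
	}
	fmt.Println("cert synced:", vmPath)
}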

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-234000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-234000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.227417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-234000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-234000 -n functional-234000: exit status 7 (33.24725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
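
The NodeLabels check is a single kubectl call: print every label key of the first node via a go-template, then look for the minikube.k8s.io/* keys. A hedged re-creation (context name from this run; the label list is copied from the expectations above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-234000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		// With the profile never started, kubectl fails with
		// "context was not found for specified context", as seen above.
		fmt.Fprintf(os.Stderr, "kubectl: %v\n%s", err, out)
		os.Exit(1)
	}
	for _, label := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
		"minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), label) {
			fmt.Fprintln(os.Stderr, "missing label:", label)
		}
	}
}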

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo systemctl is-active crio": exit status 83 (45.400334ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
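
The NonActiveRuntimeDisabled probe simply asks systemd inside the VM whether the non-selected runtime is inactive. Roughly, and only as a sketch (note `systemctl is-active` itself exits non-zero for inactive units, so the printed state matters more than the exit code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-234000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	// With ContainerRuntime=docker, crio should report "inactive"; here the
	// ssh never reaches systemd and returns the stopped-host advice instead.
	if state == "active" {
		fmt.Fprintln(os.Stderr, "crio unexpectedly active")
		os.Exit(1)
	}
	fmt.Println("crio state:", state)
}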

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1205 11:28:44.258936    8276 out.go:345] Setting OutFile to fd 1 ...
I1205 11:28:44.259142    8276 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:44.259147    8276 out.go:358] Setting ErrFile to fd 2...
I1205 11:28:44.259149    8276 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:28:44.259274    8276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:28:44.259532    8276 mustload.go:65] Loading cluster: functional-234000
I1205 11:28:44.259780    8276 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:28:44.264526    8276 out.go:177] * The control-plane node functional-234000 host is not running: state=Stopped
I1205 11:28:44.276513    8276 out.go:177]   To start a cluster, run: "minikube start -p functional-234000"

stdout: * The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 8277: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
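
RunSecondTunnel launches two concurrent `minikube tunnel` daemons and expects both to keep running; here both exited immediately with status 83. A rough sketch of the launch-and-hold shape only, with no claim to match the test's bookkeeping:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := func() *exec.Cmd {
		c := exec.Command("out/minikube-darwin-arm64", "-p",
			"functional-234000", "tunnel", "--alsologtostderr")
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Start(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		return c
	}
	first, second := start(), start()
	defer first.Process.Kill()
	defer second.Process.Kill()
	// With a stopped host both daemons print the "state=Stopped" advice and
	// exit right away, which is what the failure above records.
	time.Sleep(2 * time.Second)
}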

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-234000": client config: context "functional-234000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-234000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-234000 get svc nginx-svc: exit status 1 (70.394458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-234000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-234000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.73s)
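
AccessDirect resolves the tunnel-exposed IP of nginx-svc and fetches the welcome page; with no cluster the IP is empty, which yields the bare "http:" URL and the "no Host in request URL" retries above. A sketch, assuming the ingress IP lives at the usual LoadBalancer jsonpath:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	ipBytes, err := exec.Command("kubectl", "--context", "functional-234000",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	ip := strings.TrimSpace(string(ipBytes))
	if err != nil || ip == "" {
		// An empty IP produces the malformed URL seen in the log.
		fmt.Fprintln(os.Stderr, "no tunnel IP for nginx-svc:", err)
		os.Exit(1)
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Fprintln(os.Stderr, "unexpected body")
		os.Exit(1)
	}
}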

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-234000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-234000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.821833ms)

** stderr ** 
	error: context "functional-234000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-234000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 service list: exit status 83 (45.8245ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-234000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 service list -o json: exit status 83 (45.869625ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-234000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 service --namespace=default --https --url hello-node: exit status 83 (44.788041ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-234000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 service hello-node --url --format={{.IP}}: exit status 83 (46.739083ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-234000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 service hello-node --url: exit status 83 (46.918375ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-234000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:1569: failed to parse "* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"": parse "* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
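
The parse failure at functional_test.go:1569 is net/url rejecting the multi-line stopped-host advice that `service --url` printed in place of a URL. That behavior is easy to confirm standalone:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The exact text the command emitted above; the embedded newline is an
	// ASCII control character, which net/url refuses.
	got := "* The control-plane node functional-234000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-234000\""
	if _, err := url.Parse(got); err != nil {
		fmt.Println("parse error, as expected:", err)
	}
}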

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 version -o=json --components: exit status 83 (45.846958ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-234000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-234000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
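
Version/components decodes `minikube version -o=json --components` and looks for the component keys listed above. A sketch that assumes a flat JSON object keyed by component name, which may not match the real output shape:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-234000",
		"version", "-o=json", "--components").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "version failed:", err) // exit status 83 in this run
		os.Exit(1)
	}
	var v map[string]any
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Fprintln(os.Stderr, "not JSON:", err)
		os.Exit(1)
	}
	for _, key := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
		if _, ok := v[key]; !ok {
			fmt.Fprintln(os.Stderr, "missing component:", key)
		}
	}
}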

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-234000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-234000 image ls --format short --alsologtostderr:
I1205 11:29:24.827750    8543 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:24.827934    8543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.827938    8543 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:24.827940    8543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.828078    8543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:24.828524    8543 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:24.828583    8543 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-234000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-234000 image ls --format table --alsologtostderr:
I1205 11:29:25.072682    8555 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:25.072871    8555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:25.072874    8555 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:25.072877    8555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:25.072995    8555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:25.073421    8555 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:25.073487    8555 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-234000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-234000 image ls --format json --alsologtostderr:
I1205 11:29:25.032839    8553 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:25.033034    8553 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:25.033037    8553 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:25.033039    8553 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:25.033169    8553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:25.033623    8553 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:25.033687    8553 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-234000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-234000 image ls --format yaml --alsologtostderr:
I1205 11:29:24.867302    8545 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:24.867474    8545 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.867477    8545 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:24.867479    8545 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.867602    8545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:24.868038    8545 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:24.868097    8545 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
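
All four ImageList variants above fail the same way: with the host stopped, `image ls` still exits 0 but returns an empty list, so the registry.k8s.io/pause assertion is what trips, not the exit code. A sketch of the JSON variant:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-234000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// In this run the command printed "[]" (no node to query), so the
	// substring check below is the part that fails.
	if !strings.Contains(string(out), "registry.k8s.io/pause") {
		fmt.Fprintln(os.Stderr, "pause image not listed; got:", string(out))
		os.Exit(1)
	}
}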

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh pgrep buildkitd: exit status 83 (44.712542ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image build -t localhost/my-image:functional-234000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-234000 image build -t localhost/my-image:functional-234000 testdata/build --alsologtostderr:
I1205 11:29:24.951924    8549 out.go:345] Setting OutFile to fd 1 ...
I1205 11:29:24.952529    8549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.952532    8549 out.go:358] Setting ErrFile to fd 2...
I1205 11:29:24.952535    8549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:29:24.952693    8549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:29:24.953159    8549 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:24.953668    8549 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:29:24.953916    8549 build_images.go:133] succeeded building to: 
I1205 11:29:24.953920    8549 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
functional_test.go:446: expected "localhost/my-image:functional-234000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image load --daemon kicbase/echo-server:functional-234000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-234000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image load --daemon kicbase/echo-server:functional-234000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-234000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-234000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image load --daemon kicbase/echo-server:functional-234000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-234000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image save kicbase/echo-server:functional-234000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-234000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-234000 docker-env) && out/minikube-darwin-arm64 status -p functional-234000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-234000 docker-env) && out/minikube-darwin-arm64 status -p functional-234000": exit status 1 (48.269ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2: exit status 83 (41.951041ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
** stderr ** 
	I1205 11:29:25.110973    8558 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:29:25.111854    8558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.111857    8558 out.go:358] Setting ErrFile to fd 2...
	I1205 11:29:25.111860    8558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.111977    8558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:29:25.112161    8558 mustload.go:65] Loading cluster: functional-234000
	I1205 11:29:25.112365    8558 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:29:25.116884    8558 out.go:177] * The control-plane node functional-234000 host is not running: state=Stopped
	I1205 11:29:25.118189    8558 out.go:177]   To start a cluster, run: "minikube start -p functional-234000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2: exit status 83 (45.836375ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
** stderr ** 
	I1205 11:29:25.201970    8562 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:29:25.202138    8562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.202141    8562 out.go:358] Setting ErrFile to fd 2...
	I1205 11:29:25.202144    8562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.202265    8562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:29:25.202474    8562 mustload.go:65] Loading cluster: functional-234000
	I1205 11:29:25.202672    8562 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:29:25.206922    8562 out.go:177] * The control-plane node functional-234000 host is not running: state=Stopped
	I1205 11:29:25.210861    8562 out.go:177]   To start a cluster, run: "minikube start -p functional-234000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n", want=*"context has been updated"*
I1205 11:29:35.321515    7922 retry.go:31] will retry after 26.990119623s: Temporary Error: Get "http:": http: no Host in request URL
I1205 11:30:02.313739    7922 retry.go:31] will retry after 35.669808162s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2: exit status 83 (46.490625ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
** stderr ** 
	I1205 11:29:25.154088    8560 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:29:25.154253    8560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.154256    8560 out.go:358] Setting ErrFile to fd 2...
	I1205 11:29:25.154259    8560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:25.154388    8560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:29:25.154585    8560 mustload.go:65] Loading cluster: functional-234000
	I1205 11:29:25.154778    8560 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:29:25.159857    8560 out.go:177] * The control-plane node functional-234000 host is not running: state=Stopped
	I1205 11:29:25.163863    8560 out.go:177]   To start a cluster, run: "minikube start -p functional-234000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-234000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-234000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-234000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
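
All three UpdateContextCmd cases run the same command; only the expected wording differs ("No changes" vs. "context has been updated"). A combined sketch of that assertion, with the success strings copied from the failures above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-234000",
		"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
	got := string(out)
	// With the host stopped the command exits 83 and prints the
	// stopped-host advice, matching neither expected phrase.
	if err != nil || !(strings.Contains(got, "No changes") ||
		strings.Contains(got, "context has been updated")) {
		fmt.Fprintf(os.Stderr, "update-context: %v\n%s", err, got)
		os.Exit(1)
	}
}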

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1205 11:30:38.072144    7922 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.026398167s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
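
DNSResolutionByDig queries the in-cluster DNS service directly at 10.96.0.10; without a running tunnel the query times out, which is dig's exit status 9 above. The same probe expressed in Go, as a sketch only:

package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Route all lookups to the cluster DNS IP instead of the system resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		// Without the tunnel this times out, mirroring dig's
		// "connection timed out; no servers could be reached".
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(addrs)
}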

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1205 11:31:03.208816    7922 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:31:03.210108    7922 retry.go:31] will retry after 2.058103581s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:54162->10.96.0.10:53: write: no route to host
I1205 11:31:05.272141    7922 retry.go:31] will retry after 4.11553457s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:52450->10.96.0.10:53: write: no route to host
I1205 11:31:09.391641    7922 retry.go:31] will retry after 8.814205079s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: write udp 192.168.105.1:57677->10.96.0.10:53: write: no route to host
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.04s)
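
Every retry above dies with "write udp ...->10.96.0.10:53: write: no route to host": the host was left without a route to the 10.96.0.0/12 service network, which minikube tunnel is supposed to install. Two host-side checks on macOS, sketched for manual triage (not commands the test itself runs):

  # does any route cover the service CIDR?
  route -n get 10.96.0.10
  netstat -rn -f inet | grep '10.96'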

TestMultiControlPlane/serial/StartCluster (10.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-644000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-644000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.051411583s)

-- stdout --
	* [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:31:28.609097    8882 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:31:28.609257    8882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:31:28.609260    8882 out.go:358] Setting ErrFile to fd 2...
	I1205 11:31:28.609262    8882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:31:28.609400    8882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:31:28.610544    8882 out.go:352] Setting JSON to false
	I1205 11:31:28.628512    8882 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5457,"bootTime":1733421631,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:31:28.628590    8882 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:31:28.632779    8882 out.go:177] * [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:31:28.640796    8882 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:31:28.640846    8882 notify.go:220] Checking for updates...
	I1205 11:31:28.646755    8882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:31:28.649721    8882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:31:28.652683    8882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:31:28.655693    8882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:31:28.658767    8882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:31:28.661942    8882 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:31:28.665683    8882 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:31:28.672747    8882 start.go:297] selected driver: qemu2
	I1205 11:31:28.672754    8882 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:31:28.672762    8882 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:31:28.675222    8882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:31:28.678650    8882 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:31:28.682807    8882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:31:28.682823    8882 cni.go:84] Creating CNI manager for ""
	I1205 11:31:28.682843    8882 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 11:31:28.682853    8882 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 11:31:28.682906    8882 start.go:340] cluster config:
	{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:31:28.687384    8882 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:31:28.695717    8882 out.go:177] * Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	I1205 11:31:28.699709    8882 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:31:28.699725    8882 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:31:28.699738    8882 cache.go:56] Caching tarball of preloaded images
	I1205 11:31:28.699881    8882 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:31:28.699903    8882 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:31:28.700111    8882 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/ha-644000/config.json ...
	I1205 11:31:28.700127    8882 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/ha-644000/config.json: {Name:mk8d42294b2f3b407b2f56ed2eb37cd9f070c11f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:31:28.700557    8882 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:31:28.700612    8882 start.go:364] duration metric: took 48.709µs to acquireMachinesLock for "ha-644000"
	I1205 11:31:28.700623    8882 start.go:93] Provisioning new machine with config: &{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:31:28.700666    8882 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:31:28.708718    8882 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:31:28.726008    8882 start.go:159] libmachine.API.Create for "ha-644000" (driver="qemu2")
	I1205 11:31:28.726032    8882 client.go:168] LocalClient.Create starting
	I1205 11:31:28.726109    8882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:31:28.726145    8882 main.go:141] libmachine: Decoding PEM data...
	I1205 11:31:28.726158    8882 main.go:141] libmachine: Parsing certificate...
	I1205 11:31:28.726194    8882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:31:28.726223    8882 main.go:141] libmachine: Decoding PEM data...
	I1205 11:31:28.726233    8882 main.go:141] libmachine: Parsing certificate...
	I1205 11:31:28.726571    8882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:31:28.889127    8882 main.go:141] libmachine: Creating SSH key...
	I1205 11:31:29.154195    8882 main.go:141] libmachine: Creating Disk image...
	I1205 11:31:29.154209    8882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:31:29.154490    8882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:29.165372    8882 main.go:141] libmachine: STDOUT: 
	I1205 11:31:29.165394    8882 main.go:141] libmachine: STDERR: 
	I1205 11:31:29.165459    8882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2 +20000M
	I1205 11:31:29.174142    8882 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:31:29.174158    8882 main.go:141] libmachine: STDERR: 
	I1205 11:31:29.174172    8882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:29.174177    8882 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:31:29.174186    8882 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:31:29.174217    8882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2d:ca:55:55:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:29.176117    8882 main.go:141] libmachine: STDOUT: 
	I1205 11:31:29.176137    8882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:31:29.176159    8882 client.go:171] duration metric: took 450.124ms to LocalClient.Create
	I1205 11:31:31.178322    8882 start.go:128] duration metric: took 2.4776575s to createHost
	I1205 11:31:31.178374    8882 start.go:83] releasing machines lock for "ha-644000", held for 2.4777755s
	W1205 11:31:31.178435    8882 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:31:31.191664    8882 out.go:177] * Deleting "ha-644000" in qemu2 ...
	W1205 11:31:31.220640    8882 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:31:31.220665    8882 start.go:729] Will try again in 5 seconds ...
	I1205 11:31:36.222818    8882 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:31:36.223446    8882 start.go:364] duration metric: took 524.041µs to acquireMachinesLock for "ha-644000"
	I1205 11:31:36.223591    8882 start.go:93] Provisioning new machine with config: &{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:31:36.223997    8882 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:31:36.233697    8882 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:31:36.285699    8882 start.go:159] libmachine.API.Create for "ha-644000" (driver="qemu2")
	I1205 11:31:36.285752    8882 client.go:168] LocalClient.Create starting
	I1205 11:31:36.285898    8882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:31:36.285976    8882 main.go:141] libmachine: Decoding PEM data...
	I1205 11:31:36.285995    8882 main.go:141] libmachine: Parsing certificate...
	I1205 11:31:36.286077    8882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:31:36.286137    8882 main.go:141] libmachine: Decoding PEM data...
	I1205 11:31:36.286150    8882 main.go:141] libmachine: Parsing certificate...
	I1205 11:31:36.286748    8882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:31:36.460008    8882 main.go:141] libmachine: Creating SSH key...
	I1205 11:31:36.559103    8882 main.go:141] libmachine: Creating Disk image...
	I1205 11:31:36.559112    8882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:31:36.559322    8882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:36.569532    8882 main.go:141] libmachine: STDOUT: 
	I1205 11:31:36.569548    8882 main.go:141] libmachine: STDERR: 
	I1205 11:31:36.569602    8882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2 +20000M
	I1205 11:31:36.578363    8882 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:31:36.578378    8882 main.go:141] libmachine: STDERR: 
	I1205 11:31:36.578388    8882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:36.578392    8882 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:31:36.578407    8882 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:31:36.578442    8882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bb:a7:87:b6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:31:36.580292    8882 main.go:141] libmachine: STDOUT: 
	I1205 11:31:36.580304    8882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:31:36.580316    8882 client.go:171] duration metric: took 294.560708ms to LocalClient.Create
	I1205 11:31:38.582477    8882 start.go:128] duration metric: took 2.358444542s to createHost
	I1205 11:31:38.582541    8882 start.go:83] releasing machines lock for "ha-644000", held for 2.359091375s
	W1205 11:31:38.583009    8882 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:31:38.590517    8882 out.go:201] 
	W1205 11:31:38.599828    8882 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:31:38.599857    8882 out.go:270] * 
	* 
	W1205 11:31:38.601334    8882 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:31:38.610155    8882 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-644000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (69.482416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.12s)
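
Both VM creation attempts above fail at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the socket_vmnet daemon is down or its socket is missing on this agent. A recovery sketch, assuming the Homebrew-managed install described in the qemu2 driver docs (the service name is an assumption about this machine):

  # is the socket present and the daemon alive?
  ls -l /var/run/socket_vmnet
  pgrep -fl socket_vmnet
  # restart the daemon if it is brew-managed
  sudo brew services restart socket_vmnet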

TestMultiControlPlane/serial/DeployApp (78.03s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.997708ms)

** stderr ** 
	error: cluster "ha-644000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- rollout status deployment/busybox: exit status 1 (62.77975ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.830875ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:38.884464    7922 retry.go:31] will retry after 1.01723565s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.986834ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:40.013022    7922 retry.go:31] will retry after 1.090536387s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.185416ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:41.213206    7922 retry.go:31] will retry after 1.520913968s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.432041ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:42.844851    7922 retry.go:31] will retry after 3.126329214s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.508834ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:46.083066    7922 retry.go:31] will retry after 2.569396293s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (115.636583ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:48.770470    7922 retry.go:31] will retry after 5.899623116s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.97625ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:31:54.780382    7922 retry.go:31] will retry after 7.656753259s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.984542ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:32:02.550585    7922 retry.go:31] will retry after 13.575645979s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.26175ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:32:16.238768    7922 retry.go:31] will retry after 15.335283435s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.101333ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:32:31.685493    7922 retry.go:31] will retry after 24.645043354s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.1225ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.362458ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.789834ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.041833ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.959167ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (33.36775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (78.03s)
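
Every kubectl call in this test (and in the tests that follow) fails before reaching a server because StartCluster above never wrote a kubeconfig entry for the profile. A quick hand-run confirmation, not part of the test:

  # no context or cluster entry should be listed for ha-644000
  kubectl config get-contexts
  kubectl config get-clusters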

TestMultiControlPlane/serial/PingHostFromPods (0.10s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.297875ms)

** stderr ** 
	error: no server found for cluster "ha-644000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.499583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-644000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-644000 -v=7 --alsologtostderr: exit status 83 (52.409542ms)

-- stdout --
	* The control-plane node ha-644000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-644000"

-- /stdout --
** stderr ** 
	I1205 11:32:56.848817    8972 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:56.849232    8972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:56.849236    8972 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:56.849238    8972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:56.849404    8972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:56.849656    8972 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:56.849872    8972 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:56.855625    8972 out.go:177] * The control-plane node ha-644000 host is not running: state=Stopped
	I1205 11:32:56.864529    8972 out.go:177]   To start a cluster, run: "minikube start -p ha-644000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-644000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.407042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-644000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-644000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.787958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-644000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-644000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-644000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.821709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-644000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-644000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
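
The assertion expects four nodes and an overall "HAppy" status, but the stored profile still holds the single control-plane node with status "Starting". The same two fields can be pulled by hand, assuming jq is available (illustrative, not part of the test):

  out/minikube-darwin-arm64 profile list --output json \
    | jq '.valid[] | select(.Name=="ha-644000") | {Status, nodes: (.Config.Nodes | length)}'
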
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.789959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status --output json -v=7 --alsologtostderr: exit status 7 (34.863834ms)

-- stdout --
	{"Name":"ha-644000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1205 11:32:57.085919    8984 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:57.086070    8984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.086073    8984 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:57.086076    8984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.086218    8984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:57.086354    8984 out.go:352] Setting JSON to true
	I1205 11:32:57.086364    8984 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:57.086430    8984 notify.go:220] Checking for updates...
	I1205 11:32:57.086583    8984 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:57.086591    8984 status.go:174] checking status of ha-644000 ...
	I1205 11:32:57.086844    8984 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:32:57.086848    8984 status.go:384] host is not running, skipping remaining checks
	I1205 11:32:57.086850    8984 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-644000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
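
The helper decodes the output into []cluster.Status, but for a single-node profile the command prints a bare JSON object (see the stdout above), hence the unmarshal error. The shape mismatch is easy to reproduce by hand, assuming jq is available (illustrative only):

  # bare object, as printed here
  out/minikube-darwin-arm64 -p ha-644000 status --output json
  # slurped into the array form the helper expects
  out/minikube-darwin-arm64 -p ha-644000 status --output json | jq -s .
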
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.426625ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
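
The decode failure at ha_test.go:335 is encoding/json's standard error for decoding a JSON object into a slice: with the HA cluster reduced to a single stopped node, `status --output json` printed one object (see the stdout above), while the test unmarshals into []cluster.Status. A minimal reproduction of the mismatch, using a trimmed stand-in struct (field set taken from the log; the real type lives in minikube's cluster package):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // status is a trimmed stand-in for cluster.Status, declaring only the
    // fields that appear in the log output above.
    type status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        raw := []byte(`{"Name":"ha-644000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        // This mirrors the test's failure mode: a JSON object cannot be
        // decoded into a slice.
        var many []status
        fmt.Println(json.Unmarshal(raw, &many))
        // json: cannot unmarshal object into Go value of type []main.status

        // Decoding succeeds once the target matches the payload's shape.
        var one status
        err := json.Unmarshal(raw, &one)
        fmt.Println(err, one.Host) // <nil> Stopped
    }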
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.638541ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1205 11:32:57.155981    8988 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:57.156478    8988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.156482    8988 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:57.156484    8988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.156665    8988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:57.156915    8988 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:57.157149    8988 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:57.161735    8988 out.go:201] 
	W1205 11:32:57.164689    8988 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1205 11:32:57.164694    8988 out.go:270] * 
	* 
	W1205 11:32:57.166537    8988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:32:57.170630    8988 out.go:201] 
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-644000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (34.10875ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:32:57.207950    8990 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:57.208143    8990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.208146    8990 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:57.208149    8990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.208282    8990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:57.208399    8990 out.go:352] Setting JSON to false
	I1205 11:32:57.208410    8990 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:57.208472    8990 notify.go:220] Checking for updates...
	I1205 11:32:57.208624    8990 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:57.208633    8990 status.go:174] checking status of ha-644000 ...
	I1205 11:32:57.208876    8990 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:32:57.208880    8990 status.go:384] host is not running, skipping remaining checks
	I1205 11:32:57.208882    8990 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.034ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
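
Exit status 85 here is the GUEST_NODE_RETRIEVE path shown in the stderr: the cluster under test never came up with additional nodes, so the profile contains no m02 to stop, and every m02 operation in this serial group fails the same way. As an illustration only (this is not minikube's actual harness code), a serial subtest could short-circuit this cascade by skipping when the shared cluster is known to be down:

    package harness

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // requireRunningCluster is a hypothetical guard: it skips the calling
    // subtest unless the shared profile's host reports "Running".
    func requireRunningCluster(t *testing.T, profile string) {
        t.Helper()
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", profile).Output()
        host := strings.TrimSpace(string(out))
        if err != nil || host != "Running" {
            t.Skipf("cluster %q is not running (host=%q, err=%v)", profile, host, err)
        }
    }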
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-644000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (33.99975ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
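
The ha_test.go:415 assertion is driven by the `profile list --output json` payload quoted above, whose top level is {"invalid":[...],"valid":[{"Name":...,"Status":...,"Config":{...}}]}. A minimal sketch of extracting a profile's Status from that shape; the struct names are stand-ins, and only the fields the check needs are declared (encoding/json ignores the rest of the payload):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList declares just enough of the payload's shape; Go's JSON
    // field matching against the lowercase "valid" key is case-insensitive.
    type profileList struct {
        Valid []struct {
            Name   string
            Status string
        }
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-644000","Status":"Starting"}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // Prints "ha-644000 Starting": the profile never left Starting,
            // hence the failed "Degraded" expectation.
            fmt.Println(p.Name, p.Status)
        }
    }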
TestMultiControlPlane/serial/RestartSecondaryNode (52.93s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.807042ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1205 11:32:57.362686    8999 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:57.363136    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.363140    8999 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:57.363142    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.363309    8999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:57.363538    8999 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:57.363751    8999 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:57.367765    8999 out.go:201] 
	W1205 11:32:57.368964    8999 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1205 11:32:57.368973    8999 out.go:270] * 
	* 
	W1205 11:32:57.370647    8999 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:32:57.374701    8999 out.go:201] 
** /stderr **
ha_test.go:424: I1205 11:32:57.362686    8999 out.go:345] Setting OutFile to fd 1 ...
I1205 11:32:57.363136    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:32:57.363140    8999 out.go:358] Setting ErrFile to fd 2...
I1205 11:32:57.363142    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:32:57.363309    8999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:32:57.363538    8999 mustload.go:65] Loading cluster: ha-644000
I1205 11:32:57.363751    8999 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:32:57.367765    8999 out.go:201] 
W1205 11:32:57.368964    8999 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1205 11:32:57.368973    8999 out.go:270] * 
* 
W1205 11:32:57.370647    8999 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 11:32:57.374701    8999 out.go:201] 
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-644000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (34.445417ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:32:57.412429    9001 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:57.412632    9001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.412635    9001 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:57.412637    9001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:57.412760    9001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:57.412894    9001 out.go:352] Setting JSON to false
	I1205 11:32:57.412910    9001 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:57.412956    9001 notify.go:220] Checking for updates...
	I1205 11:32:57.413131    9001 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:57.413138    9001 status.go:174] checking status of ha-644000 ...
	I1205 11:32:57.413371    9001 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:32:57.413374    9001 status.go:384] host is not running, skipping remaining checks
	I1205 11:32:57.413376    9001 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:32:57.414306    7922 retry.go:31] will retry after 822.642792ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (81.169ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:32:58.318284    9003 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:32:58.318512    9003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:58.318516    9003 out.go:358] Setting ErrFile to fd 2...
	I1205 11:32:58.318519    9003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:32:58.318687    9003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:32:58.318843    9003 out.go:352] Setting JSON to false
	I1205 11:32:58.318855    9003 mustload.go:65] Loading cluster: ha-644000
	I1205 11:32:58.318895    9003 notify.go:220] Checking for updates...
	I1205 11:32:58.319127    9003 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:32:58.319136    9003 status.go:174] checking status of ha-644000 ...
	I1205 11:32:58.319451    9003 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:32:58.319456    9003 status.go:384] host is not running, skipping remaining checks
	I1205 11:32:58.319458    9003 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:32:58.320496    7922 retry.go:31] will retry after 2.09189619s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (79.956333ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:00.492608    9005 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:00.492805    9005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:00.492809    9005 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:00.492812    9005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:00.492963    9005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:00.493131    9005 out.go:352] Setting JSON to false
	I1205 11:33:00.493142    9005 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:00.493178    9005 notify.go:220] Checking for updates...
	I1205 11:33:00.493411    9005 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:00.493420    9005 status.go:174] checking status of ha-644000 ...
	I1205 11:33:00.493720    9005 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:00.493725    9005 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:00.493727    9005 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:00.494728    7922 retry.go:31] will retry after 2.987479056s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (78.89625ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:03.561094    9007 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:03.561327    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:03.561331    9007 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:03.561335    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:03.561542    9007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:03.561717    9007 out.go:352] Setting JSON to false
	I1205 11:33:03.561730    9007 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:03.561778    9007 notify.go:220] Checking for updates...
	I1205 11:33:03.562002    9007 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:03.562012    9007 status.go:174] checking status of ha-644000 ...
	I1205 11:33:03.562335    9007 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:03.562340    9007 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:03.562342    9007 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:03.563410    7922 retry.go:31] will retry after 4.985815853s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (79.606125ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:08.628994    9009 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:08.629244    9009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:08.629248    9009 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:08.629251    9009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:08.629411    9009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:08.629568    9009 out.go:352] Setting JSON to false
	I1205 11:33:08.629580    9009 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:08.629612    9009 notify.go:220] Checking for updates...
	I1205 11:33:08.629828    9009 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:08.629840    9009 status.go:174] checking status of ha-644000 ...
	I1205 11:33:08.630146    9009 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:08.630150    9009 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:08.630153    9009 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:08.631161    7922 retry.go:31] will retry after 5.626274293s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (80.163208ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:14.337854    9011 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:14.338061    9011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:14.338065    9011 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:14.338069    9011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:14.338210    9011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:14.338355    9011 out.go:352] Setting JSON to false
	I1205 11:33:14.338366    9011 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:14.338396    9011 notify.go:220] Checking for updates...
	I1205 11:33:14.338601    9011 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:14.338610    9011 status.go:174] checking status of ha-644000 ...
	I1205 11:33:14.338906    9011 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:14.338911    9011 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:14.338913    9011 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:14.339925    7922 retry.go:31] will retry after 11.376043016s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (77.3795ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:25.793592    9013 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:25.793836    9013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:25.793840    9013 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:25.793843    9013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:25.794004    9013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:25.794149    9013 out.go:352] Setting JSON to false
	I1205 11:33:25.794161    9013 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:25.794198    9013 notify.go:220] Checking for updates...
	I1205 11:33:25.794408    9013 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:25.794417    9013 status.go:174] checking status of ha-644000 ...
	I1205 11:33:25.794714    9013 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:25.794718    9013 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:25.794721    9013 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:25.795754    7922 retry.go:31] will retry after 11.383712511s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (79.008791ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:37.258618    9015 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:37.258843    9015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:37.258847    9015 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:37.258850    9015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:37.259018    9015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:37.259180    9015 out.go:352] Setting JSON to false
	I1205 11:33:37.259192    9015 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:37.259237    9015 notify.go:220] Checking for updates...
	I1205 11:33:37.259422    9015 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:37.259431    9015 status.go:174] checking status of ha-644000 ...
	I1205 11:33:37.259722    9015 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:37.259726    9015 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:37.259729    9015 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:33:37.260765    7922 retry.go:31] will retry after 12.874370161s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (81.500041ms)
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:33:50.216633    9024 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:50.216851    9024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:50.216855    9024 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:50.216858    9024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:50.217057    9024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:50.217218    9024 out.go:352] Setting JSON to false
	I1205 11:33:50.217233    9024 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:50.217267    9024 notify.go:220] Checking for updates...
	I1205 11:33:50.217521    9024 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:50.217530    9024 status.go:174] checking status of ha-644000 ...
	I1205 11:33:50.217852    9024 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:50.217856    9024 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:50.217859    9024 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (36.376292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.93s)
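
The 52.93s spent in this subtest is almost entirely the status poll: the retry.go:31 lines show waits of roughly 0.8s, 2.1s, 3.0s, 5.0s, 5.6s, 11.4s, 11.4s and 12.9s between attempts, i.e. a growing, jittered backoff until the subtest's time budget runs out. A minimal sketch of that pattern, capped exponential backoff with jitter; the constants are illustrative, not the harness's exact parameters:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with jittered, capped exponential backoff until it
    // succeeds or the total budget is spent. Illustrative, not harness code.
    func retryExpo(fn func() error, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        wait := 500 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("budget exhausted: %w", err)
            }
            // Sleep between 1x and 2x the current base, then double the base,
            // capping it so later waits stay bounded.
            jittered := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            wait *= 2
            if wait > 15*time.Second {
                wait = 15 * time.Second
            }
        }
    }

    func main() {
        _ = retryExpo(func() error { return errors.New("exit status 7") }, 10*time.Second)
    }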
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-644000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-644000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.241416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.39s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-644000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-644000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-644000 -v=7 --alsologtostderr: (2.027821292s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-644000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-644000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.2207495s)
-- stdout --
	* [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	* Restarting existing qemu2 VM for "ha-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 11:33:52.475480    9047 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:52.475674    9047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:52.475678    9047 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:52.475681    9047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:52.475801    9047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:52.477028    9047 out.go:352] Setting JSON to false
	I1205 11:33:52.497398    9047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5601,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:33:52.497461    9047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:33:52.501053    9047 out.go:177] * [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:33:52.508892    9047 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:33:52.508935    9047 notify.go:220] Checking for updates...
	I1205 11:33:52.516007    9047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:33:52.517426    9047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:33:52.519917    9047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:33:52.523029    9047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:33:52.526020    9047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:33:52.529250    9047 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:52.529305    9047 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:33:52.533990    9047 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:33:52.540940    9047 start.go:297] selected driver: qemu2
	I1205 11:33:52.540947    9047 start.go:901] validating driver "qemu2" against &{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:33:52.541007    9047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:33:52.543491    9047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:33:52.543522    9047 cni.go:84] Creating CNI manager for ""
	I1205 11:33:52.543549    9047 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:33:52.543620    9047 start.go:340] cluster config:
	{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:33:52.548188    9047 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:33:52.553940    9047 out.go:177] * Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	I1205 11:33:52.557903    9047 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:33:52.557917    9047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:33:52.557926    9047 cache.go:56] Caching tarball of preloaded images
	I1205 11:33:52.557994    9047 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:33:52.558000    9047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:33:52.558058    9047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/ha-644000/config.json ...
	I1205 11:33:52.558468    9047 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:33:52.558517    9047 start.go:364] duration metric: took 43.292µs to acquireMachinesLock for "ha-644000"
	I1205 11:33:52.558526    9047 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:33:52.558531    9047 fix.go:54] fixHost starting: 
	I1205 11:33:52.558659    9047 fix.go:112] recreateIfNeeded on ha-644000: state=Stopped err=<nil>
	W1205 11:33:52.558669    9047 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:33:52.559980    9047 out.go:177] * Restarting existing qemu2 VM for "ha-644000" ...
	I1205 11:33:52.567983    9047 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:33:52.568033    9047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bb:a7:87:b6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:33:52.570292    9047 main.go:141] libmachine: STDOUT: 
	I1205 11:33:52.570314    9047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:33:52.570346    9047 fix.go:56] duration metric: took 11.814041ms for fixHost
	I1205 11:33:52.570350    9047 start.go:83] releasing machines lock for "ha-644000", held for 11.828416ms
	W1205 11:33:52.570356    9047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:33:52.570390    9047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:33:52.570395    9047 start.go:729] Will try again in 5 seconds ...
	I1205 11:33:57.572520    9047 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:33:57.573009    9047 start.go:364] duration metric: took 324.166µs to acquireMachinesLock for "ha-644000"
	I1205 11:33:57.573106    9047 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:33:57.573125    9047 fix.go:54] fixHost starting: 
	I1205 11:33:57.573789    9047 fix.go:112] recreateIfNeeded on ha-644000: state=Stopped err=<nil>
	W1205 11:33:57.573814    9047 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:33:57.577264    9047 out.go:177] * Restarting existing qemu2 VM for "ha-644000" ...
	I1205 11:33:57.584234    9047 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:33:57.584455    9047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bb:a7:87:b6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:33:57.592186    9047 main.go:141] libmachine: STDOUT: 
	I1205 11:33:57.592242    9047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:33:57.592296    9047 fix.go:56] duration metric: took 19.173208ms for fixHost
	I1205 11:33:57.592312    9047 start.go:83] releasing machines lock for "ha-644000", held for 19.278208ms
	W1205 11:33:57.592480    9047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:33:57.599298    9047 out.go:201] 
	W1205 11:33:57.603273    9047 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:33:57.603295    9047 out.go:270] * 
	* 
	W1205 11:33:57.604868    9047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:33:57.613243    9047 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-644000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-644000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (35.950542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.39s)

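Every restart attempt in the failure above dies at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). That points at the socket_vmnet service on the CI host rather than at minikube itself. A minimal triage sketch, assuming the install paths shown in the log (the launchd service label below is a guess and varies by install method):

    # Does the UNIX socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet
    # Is the daemon loaded at all? (service label is an assumption)
    sudo launchctl list | grep -i socket_vmnet
    # Probe the socket directly; "Connection refused" here reproduces the failure
    nc -U /var/run/socket_vmnet < /dev/null && echo "socket is accepting connections"

If the probe is refused, restarting the socket_vmnet daemon is the likely fix; deleting the minikube profile alone would not help.
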
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.781334ms)

-- stdout --
	* The control-plane node ha-644000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-644000"

-- /stdout --
** stderr ** 
	I1205 11:33:57.765400    9059 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:57.766049    9059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:57.766053    9059 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:57.766056    9059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:57.766191    9059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:57.766397    9059 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:57.766618    9059 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:57.771459    9059 out.go:177] * The control-plane node ha-644000 host is not running: state=Stopped
	I1205 11:33:57.774486    9059 out.go:177]   To start a cluster, run: "minikube start -p ha-644000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-644000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (34.343584ms)

-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:33:57.809310    9061 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:33:57.809483    9061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:57.809487    9061 out.go:358] Setting ErrFile to fd 2...
	I1205 11:33:57.809489    9061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:33:57.809609    9061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:33:57.809732    9061 out.go:352] Setting JSON to false
	I1205 11:33:57.809742    9061 mustload.go:65] Loading cluster: ha-644000
	I1205 11:33:57.809800    9061 notify.go:220] Checking for updates...
	I1205 11:33:57.809928    9061 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:33:57.809942    9061 status.go:174] checking status of ha-644000 ...
	I1205 11:33:57.810188    9061 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:33:57.810192    9061 status.go:384] host is not running, skipping remaining checks
	I1205 11:33:57.810194    9061 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (35.199083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-644000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.675958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

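The assertion at ha_test.go:415 reduces to a single field of the profile-list JSON: after a secondary control-plane node is deleted, the computed profile Status should read "Degraded", but since the host never started it is stuck at "Starting". The same lookup the test performs can be sketched with jq (jq is assumed to be available; it is not part of the harness):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {Name, Status, Nodes: (.Config.Nodes | length)}'
    # On this run: {"Name":"ha-644000","Status":"Starting","Nodes":1}
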
TestMultiControlPlane/serial/StopCluster (3.55s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-644000 stop -v=7 --alsologtostderr: (3.440351292s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (72.872458ms)

-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:34:01.444839    9088 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:34:01.445043    9088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:01.445048    9088 out.go:358] Setting ErrFile to fd 2...
	I1205 11:34:01.445050    9088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:01.445233    9088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:34:01.445387    9088 out.go:352] Setting JSON to false
	I1205 11:34:01.445398    9088 mustload.go:65] Loading cluster: ha-644000
	I1205 11:34:01.445434    9088 notify.go:220] Checking for updates...
	I1205 11:34:01.445648    9088 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:34:01.445657    9088 status.go:174] checking status of ha-644000 ...
	I1205 11:34:01.445948    9088 status.go:371] ha-644000 host status = "Stopped" (err=<nil>)
	I1205 11:34:01.445953    9088 status.go:384] host is not running, skipping remaining checks
	I1205 11:34:01.445955    9088 status.go:176] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-644000 status -v=7 --alsologtostderr": ha-644000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (35.52525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.55s)

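The three "status says not ..." assertions above are line counts over the plain-text status output: the test expects two control-plane entries, three stopped kubelets, and two stopped apiservers, while this run prints a single stopped node. A sketch of the same counts done by hand (not part of the harness):

    out/minikube-darwin-arm64 -p ha-644000 status | grep -c "type: Control Plane"   # 1 here, 2 expected
    out/minikube-darwin-arm64 -p ha-644000 status | grep -c "kubelet: Stopped"      # 1 here, 3 expected
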
TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-644000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-644000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.191019958s)

-- stdout --
	* [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	* Restarting existing qemu2 VM for "ha-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:34:01.515093    9092 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:34:01.515234    9092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:01.515237    9092 out.go:358] Setting ErrFile to fd 2...
	I1205 11:34:01.515240    9092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:01.515393    9092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:34:01.516469    9092 out.go:352] Setting JSON to false
	I1205 11:34:01.534205    9092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5610,"bootTime":1733421631,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:34:01.534272    9092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:34:01.538947    9092 out.go:177] * [ha-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:34:01.547920    9092 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:34:01.547990    9092 notify.go:220] Checking for updates...
	I1205 11:34:01.554882    9092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:34:01.557770    9092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:34:01.560886    9092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:34:01.563886    9092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:34:01.566914    9092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:34:01.570111    9092 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:34:01.570377    9092 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:34:01.574918    9092 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:34:01.581865    9092 start.go:297] selected driver: qemu2
	I1205 11:34:01.581870    9092 start.go:901] validating driver "qemu2" against &{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:34:01.581911    9092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:34:01.584375    9092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:34:01.584402    9092 cni.go:84] Creating CNI manager for ""
	I1205 11:34:01.584420    9092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:34:01.584463    9092 start.go:340] cluster config:
	{Name:ha-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-644000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:34:01.589050    9092 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:34:01.596854    9092 out.go:177] * Starting "ha-644000" primary control-plane node in "ha-644000" cluster
	I1205 11:34:01.600832    9092 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:34:01.600845    9092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:34:01.600855    9092 cache.go:56] Caching tarball of preloaded images
	I1205 11:34:01.600902    9092 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:34:01.600908    9092 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:34:01.600960    9092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/ha-644000/config.json ...
	I1205 11:34:01.601409    9092 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:34:01.601441    9092 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "ha-644000"
	I1205 11:34:01.601449    9092 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:34:01.601455    9092 fix.go:54] fixHost starting: 
	I1205 11:34:01.601579    9092 fix.go:112] recreateIfNeeded on ha-644000: state=Stopped err=<nil>
	W1205 11:34:01.601584    9092 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:34:01.608873    9092 out.go:177] * Restarting existing qemu2 VM for "ha-644000" ...
	I1205 11:34:01.612864    9092 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:34:01.612912    9092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bb:a7:87:b6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:34:01.615162    9092 main.go:141] libmachine: STDOUT: 
	I1205 11:34:01.615184    9092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:34:01.615216    9092 fix.go:56] duration metric: took 13.760625ms for fixHost
	I1205 11:34:01.615222    9092 start.go:83] releasing machines lock for "ha-644000", held for 13.776833ms
	W1205 11:34:01.615228    9092 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:34:01.615267    9092 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:34:01.615271    9092 start.go:729] Will try again in 5 seconds ...
	I1205 11:34:06.617409    9092 start.go:360] acquireMachinesLock for ha-644000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:34:06.617783    9092 start.go:364] duration metric: took 293.458µs to acquireMachinesLock for "ha-644000"
	I1205 11:34:06.617904    9092 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:34:06.617922    9092 fix.go:54] fixHost starting: 
	I1205 11:34:06.618586    9092 fix.go:112] recreateIfNeeded on ha-644000: state=Stopped err=<nil>
	W1205 11:34:06.618610    9092 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:34:06.622960    9092 out.go:177] * Restarting existing qemu2 VM for "ha-644000" ...
	I1205 11:34:06.627095    9092 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:34:06.627465    9092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bb:a7:87:b6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/ha-644000/disk.qcow2
	I1205 11:34:06.637281    9092 main.go:141] libmachine: STDOUT: 
	I1205 11:34:06.637331    9092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:34:06.637385    9092 fix.go:56] duration metric: took 19.466584ms for fixHost
	I1205 11:34:06.637405    9092 start.go:83] releasing machines lock for "ha-644000", held for 19.603291ms
	W1205 11:34:06.637616    9092 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:34:06.644967    9092 out.go:201] 
	W1205 11:34:06.649078    9092 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:34:06.649116    9092 out.go:270] * 
	* 
	W1205 11:34:06.651712    9092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:34:06.660026    9092 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-644000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (77.57ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)

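The error text itself suggests the usual recovery path; worth noting, though in this run it cannot succeed until /var/run/socket_vmnet accepts connections again:

    out/minikube-darwin-arm64 delete -p ha-644000
    out/minikube-darwin-arm64 start -p ha-644000 --wait=true --driver=qemu2
    # With the vmnet daemon still down, the start fails with the same
    # "Connection refused", so fix socket_vmnet first.
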
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-644000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.598167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-644000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-644000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.87175ms)

-- stdout --
	* The control-plane node ha-644000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-644000"

-- /stdout --
** stderr ** 
	I1205 11:34:06.874364    9107 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:34:06.874564    9107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:06.874567    9107 out.go:358] Setting ErrFile to fd 2...
	I1205 11:34:06.874570    9107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:06.874704    9107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:34:06.874940    9107 mustload.go:65] Loading cluster: ha-644000
	I1205 11:34:06.875163    9107 config.go:182] Loaded profile config "ha-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:34:06.879668    9107 out.go:177] * The control-plane node ha-644000 host is not running: state=Stopped
	I1205 11:34:06.882660    9107 out.go:177]   To start a cluster, run: "minikube start -p ha-644000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-644000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (35.13325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-644000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-644000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-644000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-644000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-644000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-644000 -n ha-644000: exit status 7 (34.657458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

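Both checks in this subtest parse the same profile-list JSON: the node count under Config.Nodes (expected 4, actual 1) and the computed Status (expected "HAppy", actual "Starting"). The two lookups, sketched with jq as above (jq assumed available):

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[0].Config.Nodes | length'
    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[0].Status'
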
TestImageBuild/serial/Setup (10.12s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-608000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-608000 --driver=qemu2 : exit status 80 (10.049867875s)

-- stdout --
	* [image-608000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-608000" primary control-plane node in "image-608000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-608000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-608000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-608000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-608000 -n image-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-608000 -n image-608000: exit status 7 (73.996916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-608000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.12s)
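Note: every qemu2 start failure in this report reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet. A quick diagnostic sketch (not part of the test suite) that reproduces the exact "Connection refused" that socket_vmnet_client reports:

package main

import (
	"fmt"
	"net"
)

func main() {
	// "Connection refused" on a unix socket means the socket file
	// exists but no socket_vmnet daemon is listening on it, which
	// matches the error in the log above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}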

TestJSONOutput/start/Command (9.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-849000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-849000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.862977417s)

-- stdout --
	{"specversion":"1.0","id":"c260bcae-1615-4eaf-9b0c-38bce7b41434","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c525e29-daa5-410d-9078-d0251030e3a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"e7525b57-5c6c-4c39-9ffb-40e18fe18e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig"}}
	{"specversion":"1.0","id":"968f6a2a-5cf3-43d4-be19-1b88895872fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"13b49b7e-a34e-4219-b03e-8ffe6668f91a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7d363bb-ee5d-447e-b747-0c523f8c7b26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube"}}
	{"specversion":"1.0","id":"4deeec36-fe53-414b-ac22-1dffc25c08ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b4712975-e598-4a44-8514-5e19fbd0c552","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"44d239dd-403a-4342-aa7c-77ac1b5f7a37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"012ab60e-bb71-4eb5-9082-ac0a5883b20d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-849000\" primary control-plane node in \"json-output-849000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"632fca22-9612-4033-8c28-09112bf8868f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"001faadd-3ec7-4fa3-bc8e-6da1ab957c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-849000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec2cc61f-5116-40f8-9caa-0fcab309e53b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3a248817-ae0f-42e6-ab6f-a13c82733b9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"192e8e6d-1fe9-4f87-9d04-c2f22a545f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-849000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a4903c56-b5d8-4b8e-bc1f-ffeef54c43b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"58239095-4dc8-415a-a694-e39fdf7c3945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-849000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
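Note: the `invalid character 'O'` parse error comes from the raw `OUTPUT:` / `ERROR:` lines that socket_vmnet_client interleaves with the CloudEvents JSON in the stdout above; the test decodes stdout line by line and fails on the first non-JSON line. A simplified sketch of that per-line decode (not the test's exact code; the sample lines are abbreviated from the capture above):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two CloudEvents lines with a raw socket_vmnet_client line in
	// between, as in the captured stdout above (abbreviated).
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`

	for _, line := range strings.Split(stdout, "\n") {
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
	}
	fmt.Println("all lines were valid CloudEvents")
}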

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-849000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-849000 --output=json --user=testUser: exit status 83 (85.374083ms)

-- stdout --
	{"specversion":"1.0","id":"9c90a0c6-244b-4913-b098-32adbe1827e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-849000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"20719283-2cb1-4855-94bf-d63505b78b9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-849000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-849000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-849000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-849000 --output=json --user=testUser: exit status 83 (48.505792ms)

-- stdout --
	* The control-plane node json-output-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-849000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-849000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-849000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-714000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-714000 --driver=qemu2 : exit status 80 (10.02532675s)

-- stdout --
	* [first-714000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-714000" primary control-plane node in "first-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-714000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-05 11:34:40.324637 -0800 PST m=+440.296009043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-716000 -n second-716000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-716000 -n second-716000: exit status 85 (85.741958ms)

-- stdout --
	* Profile "second-716000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-716000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-716000" host is not running, skipping log retrieval (state="* Profile \"second-716000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-716000\"")
helpers_test.go:175: Cleaning up "second-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-716000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-05 11:34:40.525328 -0800 PST m=+440.496700793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-714000 -n first-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-714000 -n first-714000: exit status 7 (34.698833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-714000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-714000
--- FAIL: TestMinikubeProfile (10.34s)
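Note: the post-mortem helpers above treat some non-zero status codes as expected: exit 7 for a stopped host and exit 85 for a missing profile are both logged as "may be ok". A small sketch of that interpretation, with the code-to-meaning mapping inferred from this report alone rather than from an authoritative exit-code table:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "first-714000", "-n", "first-714000")
	out, _ := cmd.Output() // non-zero exit still yields captured stdout
	switch code := cmd.ProcessState.ExitCode(); code {
	case 0:
		fmt.Printf("host: %s", out)
	case 7, 85:
		// Stopped host / missing profile: logged but not treated as fatal.
		fmt.Printf("status error: exit status %d (may be ok): %s", code, out)
	default:
		fmt.Println("unexpected status failure: exit", code)
	}
}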

TestMountStart/serial/StartWithMountFirst (10.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-516000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-516000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.051627458s)

-- stdout --
	* [mount-start-1-516000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-516000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-516000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-516000 -n mount-start-1-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-516000 -n mount-start-1-516000: exit status 7 (73.780875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)

TestMultiNode/serial/FreshStart2Nodes (9.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-681000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-681000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.877391958s)

-- stdout --
	* [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-681000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:34:50.988841    9252 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:34:50.988991    9252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:50.988994    9252 out.go:358] Setting ErrFile to fd 2...
	I1205 11:34:50.988997    9252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:34:50.989134    9252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:34:50.990250    9252 out.go:352] Setting JSON to false
	I1205 11:34:51.008099    9252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5659,"bootTime":1733421631,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:34:51.008169    9252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:34:51.012828    9252 out.go:177] * [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:34:51.021727    9252 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:34:51.021788    9252 notify.go:220] Checking for updates...
	I1205 11:34:51.028661    9252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:34:51.031699    9252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:34:51.034730    9252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:34:51.035933    9252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:34:51.038674    9252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:34:51.041923    9252 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:34:51.046589    9252 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:34:51.053698    9252 start.go:297] selected driver: qemu2
	I1205 11:34:51.053706    9252 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:34:51.053713    9252 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:34:51.056268    9252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:34:51.059750    9252 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:34:51.062750    9252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:34:51.062766    9252 cni.go:84] Creating CNI manager for ""
	I1205 11:34:51.062788    9252 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 11:34:51.062792    9252 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 11:34:51.062823    9252 start.go:340] cluster config:
	{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:34:51.067507    9252 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:34:51.073685    9252 out.go:177] * Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	I1205 11:34:51.077684    9252 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:34:51.077700    9252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:34:51.077713    9252 cache.go:56] Caching tarball of preloaded images
	I1205 11:34:51.077807    9252 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:34:51.077813    9252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:34:51.078010    9252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/multinode-681000/config.json ...
	I1205 11:34:51.078022    9252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/multinode-681000/config.json: {Name:mk664217da126529e3f9842f9c58c47ef55aa957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:34:51.078278    9252 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:34:51.078325    9252 start.go:364] duration metric: took 41.25µs to acquireMachinesLock for "multinode-681000"
	I1205 11:34:51.078337    9252 start.go:93] Provisioning new machine with config: &{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:34:51.078373    9252 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:34:51.086725    9252 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:34:51.104161    9252 start.go:159] libmachine.API.Create for "multinode-681000" (driver="qemu2")
	I1205 11:34:51.104193    9252 client.go:168] LocalClient.Create starting
	I1205 11:34:51.104262    9252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:34:51.104297    9252 main.go:141] libmachine: Decoding PEM data...
	I1205 11:34:51.104311    9252 main.go:141] libmachine: Parsing certificate...
	I1205 11:34:51.104344    9252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:34:51.104372    9252 main.go:141] libmachine: Decoding PEM data...
	I1205 11:34:51.104380    9252 main.go:141] libmachine: Parsing certificate...
	I1205 11:34:51.104770    9252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:34:51.264404    9252 main.go:141] libmachine: Creating SSH key...
	I1205 11:34:51.371045    9252 main.go:141] libmachine: Creating Disk image...
	I1205 11:34:51.371051    9252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:34:51.371271    9252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:51.381561    9252 main.go:141] libmachine: STDOUT: 
	I1205 11:34:51.381584    9252 main.go:141] libmachine: STDERR: 
	I1205 11:34:51.381636    9252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2 +20000M
	I1205 11:34:51.390198    9252 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:34:51.390216    9252 main.go:141] libmachine: STDERR: 
	I1205 11:34:51.390230    9252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:51.390234    9252 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:34:51.390244    9252 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:34:51.390274    9252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:84:b3:46:16:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:51.392094    9252 main.go:141] libmachine: STDOUT: 
	I1205 11:34:51.392106    9252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:34:51.392133    9252 client.go:171] duration metric: took 287.934917ms to LocalClient.Create
	I1205 11:34:53.394292    9252 start.go:128] duration metric: took 2.315918042s to createHost
	I1205 11:34:53.394355    9252 start.go:83] releasing machines lock for "multinode-681000", held for 2.316040542s
	W1205 11:34:53.394409    9252 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:34:53.407633    9252 out.go:177] * Deleting "multinode-681000" in qemu2 ...
	W1205 11:34:53.434586    9252 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:34:53.434616    9252 start.go:729] Will try again in 5 seconds ...
	I1205 11:34:58.434823    9252 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:34:58.435496    9252 start.go:364] duration metric: took 465.5µs to acquireMachinesLock for "multinode-681000"
	I1205 11:34:58.435622    9252 start.go:93] Provisioning new machine with config: &{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:34:58.435953    9252 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:34:58.449689    9252 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:34:58.500171    9252 start.go:159] libmachine.API.Create for "multinode-681000" (driver="qemu2")
	I1205 11:34:58.500231    9252 client.go:168] LocalClient.Create starting
	I1205 11:34:58.500375    9252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:34:58.500453    9252 main.go:141] libmachine: Decoding PEM data...
	I1205 11:34:58.500469    9252 main.go:141] libmachine: Parsing certificate...
	I1205 11:34:58.500528    9252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:34:58.500587    9252 main.go:141] libmachine: Decoding PEM data...
	I1205 11:34:58.500598    9252 main.go:141] libmachine: Parsing certificate...
	I1205 11:34:58.501306    9252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:34:58.672165    9252 main.go:141] libmachine: Creating SSH key...
	I1205 11:34:58.767310    9252 main.go:141] libmachine: Creating Disk image...
	I1205 11:34:58.767315    9252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:34:58.767499    9252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:58.777710    9252 main.go:141] libmachine: STDOUT: 
	I1205 11:34:58.777729    9252 main.go:141] libmachine: STDERR: 
	I1205 11:34:58.777798    9252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2 +20000M
	I1205 11:34:58.786341    9252 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:34:58.786354    9252 main.go:141] libmachine: STDERR: 
	I1205 11:34:58.786370    9252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:58.786375    9252 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:34:58.786385    9252 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:34:58.786416    9252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4f:49:0c:b3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:34:58.788178    9252 main.go:141] libmachine: STDOUT: 
	I1205 11:34:58.788194    9252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:34:58.788210    9252 client.go:171] duration metric: took 287.976708ms to LocalClient.Create
	I1205 11:35:00.790375    9252 start.go:128] duration metric: took 2.354416291s to createHost
	I1205 11:35:00.790456    9252 start.go:83] releasing machines lock for "multinode-681000", held for 2.35494625s
	W1205 11:35:00.790918    9252 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:35:00.801574    9252 out.go:201] 
	W1205 11:35:00.805607    9252 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:35:00.805917    9252 out.go:270] * 
	* 
	W1205 11:35:00.808524    9252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:35:00.818510    9252 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-681000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (72.915166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.95s)
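Note: the stderr log above shows the whole create/retry cycle: grow the disk with qemu-img, launch QEMU through socket_vmnet_client, fail on the socket connect, delete the machine, wait 5 seconds, and try exactly once more. A condensed sketch of that flow, with paths and arguments abbreviated from the log and error handling simplified (this is an illustration, not libmachine's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// createHost condenses the libmachine steps from the stderr log:
// resize the disk image, then launch QEMU through socket_vmnet_client.
func createHost() error {
	if out, err := exec.Command("qemu-img", "resize",
		"disk.qcow2", "+20000M").CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img: %v: %s", err, out)
	}
	// QEMU's network fd comes from socket_vmnet_client, so a dead
	// socket_vmnet daemon fails the whole launch (args abbreviated).
	return exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", "qemu-system-aarch64").Run()
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}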

TestMultiNode/serial/DeployApp2Nodes (81.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (63.786166ms)

** stderr ** 
	error: cluster "multinode-681000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- rollout status deployment/busybox: exit status 1 (62.053541ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.063583ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:01.096717    7922 retry.go:31] will retry after 753.093829ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.055375ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:01.963252    7922 retry.go:31] will retry after 1.201620487s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.598333ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:03.275880    7922 retry.go:31] will retry after 3.150843356s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.5825ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:06.537602    7922 retry.go:31] will retry after 1.720436294s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.209459ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:08.368624    7922 retry.go:31] will retry after 5.692183857s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.135125ms)

** stderr ** 
	error: no server found for cluster "multinode-681000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:14.172259    7922 retry.go:31] will retry after 9.788856131s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.329166ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:24.070888    7922 retry.go:31] will retry after 9.899130294s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.357542ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:34.080821    7922 retry.go:31] will retry after 16.92339958s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.04925ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 11:35:51.114485    7922 retry.go:31] will retry after 31.139339308s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.142958ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.885958ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.174625ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.530125ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.586625ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.031375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (81.74s)
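The "retry.go:31] will retry after ..." lines above come from the harness's backoff helper: the failing kubectl probe is re-run with roughly doubling, jittered waits (1.2s, 3.2s, 1.7s, 5.7s, 9.8s, 9.9s, 16.9s, 31.1s) until the overall time budget is spent. A minimal Go sketch of that pattern, with hypothetical helper names (not minikube's actual pkg/util/retry code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo re-runs f with jittered exponential backoff until it
	// succeeds or the overall budget is spent, logging each wait the
	// way the harness does above.
	func retryExpo(f func() error, initial, budget time.Duration) error {
		var err error
		wait := initial
		for start := time.Now(); time.Since(start) < budget; wait *= 2 {
			if err = f(); err == nil {
				return nil
			}
			// up to 100% jitter, which matches the uneven intervals in the log
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		// with no API server behind the kubeconfig entry every probe fails,
		// so the loop runs out its budget exactly as the test above does
		err := retryExpo(func() error { return errors.New("exit status 1") },
			100*time.Millisecond, 2*time.Second)
		fmt.Println("giving up:", err)
	}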

TestMultiNode/serial/PingHostFrom2Pods (0.1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-681000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.499334ms)
** stderr ** 
	error: no server found for cluster "multinode-681000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.914416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.09s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-681000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-681000 -v 3 --alsologtostderr: exit status 83 (50.710833ms)
-- stdout --
	* The control-plane node multinode-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-681000"
-- /stdout --
** stderr ** 
	I1205 11:36:22.777750    9340 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:22.777945    9340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:22.777948    9340 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:22.777950    9340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:22.778074    9340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:22.778325    9340 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:22.778545    9340 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:22.784417    9340 out.go:177] * The control-plane node multinode-681000 host is not running: state=Stopped
	I1205 11:36:22.789396    9340 out.go:177]   To start a cluster, run: "minikube start -p multinode-681000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-681000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.349791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.09s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-681000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-681000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.008334ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-681000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-681000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-681000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.338ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.09s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-681000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-681000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-681000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-681000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (33.81225ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
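The assertion above decodes the `profile list --output json` payload and counts Config.Nodes for the profile: because the cluster never came up, the profile still carries only its initial control-plane node instead of the expected three. A trimmed illustration of that check, with the structs cut down to the fields used here (not minikube's full config types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the fields of the payload that the test
	// inspects; the real structs live in minikube's config package.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		// abbreviated from the payload logged above
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-681000",
			"Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1, test wants 3
		}
	}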

TestMultiNode/serial/CopyFile (0.07s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status --output json --alsologtostderr: exit status 7 (34.8505ms)
-- stdout --
	{"Name":"multinode-681000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I1205 11:36:23.012369    9352 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:23.012554    9352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.012557    9352 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:23.012560    9352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.012699    9352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:23.012827    9352 out.go:352] Setting JSON to true
	I1205 11:36:23.012837    9352 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:23.012893    9352 notify.go:220] Checking for updates...
	I1205 11:36:23.013054    9352 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:23.013062    9352 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:23.013305    9352 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:23.013309    9352 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:23.013310    9352 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-681000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.897208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
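The decode failure above is a shape mismatch rather than bad JSON: with a single node, `minikube status --output json` emits one JSON object, while the test unmarshals into a []cluster.Status slice, and encoding/json refuses an object where an array is expected. A self-contained illustration (Status here is a stand-in for minikube's cluster.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for minikube's cluster.Status, reduced to
	// the fields that appear in the stdout above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		out := []byte(`{"Name":"multinode-681000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(out, &many))
		// json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(out, &one), one.Host) // <nil> Stopped
	}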

TestMultiNode/serial/StopNode (0.16s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 node stop m03: exit status 85 (51.208333ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-681000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status: exit status 7 (34.523917ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr: exit status 7 (35.018542ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:23.168070    9360 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:23.168266    9360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.168269    9360 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:23.168271    9360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.168400    9360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:23.168534    9360 out.go:352] Setting JSON to false
	I1205 11:36:23.168544    9360 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:23.168833    9360 notify.go:220] Checking for updates...
	I1205 11:36:23.169259    9360 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:23.169284    9360 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:23.169830    9360 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:23.169836    9360 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:23.169839    9360 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr": multinode-681000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.939792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

TestMultiNode/serial/StartAfterStop (50.84s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.954208ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1205 11:36:23.238970    9364 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:23.239415    9364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.239419    9364 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:23.239421    9364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.239608    9364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:23.239831    9364 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:23.240036    9364 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:23.244089    9364 out.go:201] 
	W1205 11:36:23.247138    9364 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1205 11:36:23.247143    9364 out.go:270] * 
	* 
	W1205 11:36:23.248920    9364 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:36:23.252040    9364 out.go:201] 
** /stderr **
multinode_test.go:284: I1205 11:36:23.238970    9364 out.go:345] Setting OutFile to fd 1 ...
I1205 11:36:23.239415    9364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:36:23.239419    9364 out.go:358] Setting ErrFile to fd 2...
I1205 11:36:23.239421    9364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 11:36:23.239608    9364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
I1205 11:36:23.239831    9364 mustload.go:65] Loading cluster: multinode-681000
I1205 11:36:23.240036    9364 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 11:36:23.244089    9364 out.go:201] 
W1205 11:36:23.247138    9364 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1205 11:36:23.247143    9364 out.go:270] * 
* 
W1205 11:36:23.248920    9364 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 11:36:23.252040    9364 out.go:201] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-681000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (35.196958ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:23.290552    9366 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:23.290724    9366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.290727    9366 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:23.290730    9366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:23.290865    9366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:23.291006    9366 out.go:352] Setting JSON to false
	I1205 11:36:23.291015    9366 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:23.291085    9366 notify.go:220] Checking for updates...
	I1205 11:36:23.291237    9366 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:23.291248    9366 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:23.291478    9366 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:23.291482    9366 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:23.291484    9366 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:23.292413    7922 retry.go:31] will retry after 689.602021ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (79.840792ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:24.062053    9368 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:24.062298    9368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:24.062302    9368 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:24.062306    9368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:24.062484    9368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:24.062657    9368 out.go:352] Setting JSON to false
	I1205 11:36:24.062669    9368 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:24.062697    9368 notify.go:220] Checking for updates...
	I1205 11:36:24.062905    9368 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:24.062913    9368 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:24.063202    9368 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:24.063206    9368 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:24.063209    9368 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:24.064240    7922 retry.go:31] will retry after 1.221713449s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (78.727375ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:25.364880    9370 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:25.365076    9370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:25.365081    9370 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:25.365084    9370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:25.365266    9370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:25.365425    9370 out.go:352] Setting JSON to false
	I1205 11:36:25.365437    9370 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:25.365492    9370 notify.go:220] Checking for updates...
	I1205 11:36:25.365666    9370 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:25.365675    9370 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:25.365980    9370 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:25.365985    9370 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:25.365987    9370 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:25.366989    7922 retry.go:31] will retry after 3.035570077s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (79.515791ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:28.482232    9372 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:28.482435    9372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:28.482439    9372 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:28.482442    9372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:28.482603    9372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:28.482791    9372 out.go:352] Setting JSON to false
	I1205 11:36:28.482801    9372 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:28.482845    9372 notify.go:220] Checking for updates...
	I1205 11:36:28.483064    9372 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:28.483073    9372 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:28.483381    9372 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:28.483386    9372 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:28.483389    9372 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:28.484379    7922 retry.go:31] will retry after 4.033338704s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (80.550125ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:32.598377    9374 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:32.598607    9374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:32.598612    9374 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:32.598614    9374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:32.598758    9374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:32.598908    9374 out.go:352] Setting JSON to false
	I1205 11:36:32.598919    9374 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:32.598950    9374 notify.go:220] Checking for updates...
	I1205 11:36:32.599162    9374 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:32.599173    9374 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:32.599486    9374 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:32.599490    9374 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:32.599493    9374 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:32.600588    7922 retry.go:31] will retry after 6.47558307s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (80.059084ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:39.156421    9376 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:39.156624    9376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:39.156629    9376 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:39.156631    9376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:39.156808    9376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:39.156962    9376 out.go:352] Setting JSON to false
	I1205 11:36:39.156974    9376 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:39.157030    9376 notify.go:220] Checking for updates...
	I1205 11:36:39.157259    9376 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:39.157270    9376 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:39.157553    9376 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:39.157558    9376 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:39.157561    9376 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:39.158575    7922 retry.go:31] will retry after 10.958690713s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (80.116042ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:50.197693    9381 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:50.197899    9381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:50.197903    9381 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:50.197906    9381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:50.198086    9381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:50.198251    9381 out.go:352] Setting JSON to false
	I1205 11:36:50.198263    9381 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:50.198295    9381 notify.go:220] Checking for updates...
	I1205 11:36:50.198528    9381 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:50.198538    9381 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:50.198807    9381 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:50.198811    9381 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:50.198814    9381 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:50.199854    7922 retry.go:31] will retry after 7.455084491s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (79.493417ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:36:57.734642    9385 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:36:57.734869    9385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:57.734874    9385 out.go:358] Setting ErrFile to fd 2...
	I1205 11:36:57.734876    9385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:36:57.735064    9385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:36:57.735207    9385 out.go:352] Setting JSON to false
	I1205 11:36:57.735220    9385 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:36:57.735271    9385 notify.go:220] Checking for updates...
	I1205 11:36:57.735465    9385 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:36:57.735474    9385 status.go:174] checking status of multinode-681000 ...
	I1205 11:36:57.735792    9385 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:36:57.735796    9385 status.go:384] host is not running, skipping remaining checks
	I1205 11:36:57.735799    9385 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1205 11:36:57.736859    7922 retry.go:31] will retry after 16.191586873s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr: exit status 7 (81.349791ms)
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I1205 11:37:14.010024    9390 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:14.010228    9390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:14.010232    9390 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:14.010235    9390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:14.010403    9390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:14.010547    9390 out.go:352] Setting JSON to false
	I1205 11:37:14.010558    9390 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:37:14.010595    9390 notify.go:220] Checking for updates...
	I1205 11:37:14.010816    9390 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:14.010824    9390 status.go:174] checking status of multinode-681000 ...
	I1205 11:37:14.011134    9390 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:37:14.011138    9390 status.go:384] host is not running, skipping remaining checks
	I1205 11:37:14.011140    9390 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-681000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (36.998541ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (50.84s)

TestMultiNode/serial/RestartKeepsNodes (8.68s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-681000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-681000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-681000: (3.303805459s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-681000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-681000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.233252459s)
-- stdout --
	* [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	* Restarting existing qemu2 VM for "multinode-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:37:17.456880    9414 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:17.457092    9414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:17.457096    9414 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:17.457098    9414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:17.457282    9414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:17.458541    9414 out.go:352] Setting JSON to false
	I1205 11:37:17.478256    9414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5806,"bootTime":1733421631,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:37:17.478327    9414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:37:17.483576    9414 out.go:177] * [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:37:17.490516    9414 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:37:17.490589    9414 notify.go:220] Checking for updates...
	I1205 11:37:17.496438    9414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:37:17.499529    9414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:37:17.502483    9414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:37:17.505581    9414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:37:17.508462    9414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:37:17.511859    9414 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:17.511914    9414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:37:17.516419    9414 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:37:17.523498    9414 start.go:297] selected driver: qemu2
	I1205 11:37:17.523504    9414 start.go:901] validating driver "qemu2" against &{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:37:17.523549    9414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:37:17.526021    9414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:37:17.526044    9414 cni.go:84] Creating CNI manager for ""
	I1205 11:37:17.526069    9414 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:37:17.526120    9414 start.go:340] cluster config:
	{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:37:17.530759    9414 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:17.538428    9414 out.go:177] * Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	I1205 11:37:17.542520    9414 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:37:17.542535    9414 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:37:17.542545    9414 cache.go:56] Caching tarball of preloaded images
	I1205 11:37:17.542611    9414 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:37:17.542616    9414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:37:17.542662    9414 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/multinode-681000/config.json ...
	I1205 11:37:17.543055    9414 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:17.543102    9414 start.go:364] duration metric: took 40.583µs to acquireMachinesLock for "multinode-681000"
	I1205 11:37:17.543114    9414 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:37:17.543118    9414 fix.go:54] fixHost starting: 
	I1205 11:37:17.543238    9414 fix.go:112] recreateIfNeeded on multinode-681000: state=Stopped err=<nil>
	W1205 11:37:17.543246    9414 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:37:17.546525    9414 out.go:177] * Restarting existing qemu2 VM for "multinode-681000" ...
	I1205 11:37:17.554449    9414 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:17.554499    9414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4f:49:0c:b3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:37:17.556703    9414 main.go:141] libmachine: STDOUT: 
	I1205 11:37:17.556722    9414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:17.556749    9414 fix.go:56] duration metric: took 13.631667ms for fixHost
	I1205 11:37:17.556754    9414 start.go:83] releasing machines lock for "multinode-681000", held for 13.64825ms
	W1205 11:37:17.556759    9414 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:37:17.556788    9414 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:17.556793    9414 start.go:729] Will try again in 5 seconds ...
	I1205 11:37:22.559007    9414 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:22.559510    9414 start.go:364] duration metric: took 402.959µs to acquireMachinesLock for "multinode-681000"
	I1205 11:37:22.559658    9414 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:37:22.559680    9414 fix.go:54] fixHost starting: 
	I1205 11:37:22.560542    9414 fix.go:112] recreateIfNeeded on multinode-681000: state=Stopped err=<nil>
	W1205 11:37:22.560567    9414 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:37:22.565118    9414 out.go:177] * Restarting existing qemu2 VM for "multinode-681000" ...
	I1205 11:37:22.569138    9414 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:22.569364    9414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4f:49:0c:b3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:37:22.580418    9414 main.go:141] libmachine: STDOUT: 
	I1205 11:37:22.580489    9414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:22.580571    9414 fix.go:56] duration metric: took 20.896042ms for fixHost
	I1205 11:37:22.580587    9414 start.go:83] releasing machines lock for "multinode-681000", held for 21.054459ms
	W1205 11:37:22.580750    9414 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:22.592455    9414 out.go:201] 
	W1205 11:37:22.597039    9414 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:37:22.597069    9414 out.go:270] * 
	* 
	W1205 11:37:22.599976    9414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:37:22.608034    9414 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-681000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-681000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (36.728334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.68s)
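
All of the restart attempts in this block die at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so the VM never boots. A quick probe of that socket (a sketch, assuming the daemon should be listening at the path shown in the log) separates "daemon down on the agent" from any minikube-side failure:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the ERROR lines in the log above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refused connection from this probe points at the socket_vmnet service on the build agent rather than at the tests themselves.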

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 node delete m03: exit status 83 (44.722167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-681000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-681000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr: exit status 7 (33.753208ms)

                                                
                                                
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:37:22.810851    9428 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:22.811021    9428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:22.811025    9428 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:22.811027    9428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:22.811142    9428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:22.811257    9428 out.go:352] Setting JSON to false
	I1205 11:37:22.811267    9428 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:37:22.811323    9428 notify.go:220] Checking for updates...
	I1205 11:37:22.811473    9428 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:22.811480    9428 status.go:174] checking status of multinode-681000 ...
	I1205 11:37:22.811719    9428 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:37:22.811723    9428 status.go:384] host is not running, skipping remaining checks
	I1205 11:37:22.811724    9428 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.693125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-681000 stop: (2.136530125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status: exit status 7 (68.668041ms)

                                                
                                                
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr: exit status 7 (35.677833ms)

                                                
                                                
-- stdout --
	multinode-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:37:25.087000    9446 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:25.087194    9446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:25.087197    9446 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:25.087199    9446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:25.087319    9446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:25.087440    9446 out.go:352] Setting JSON to false
	I1205 11:37:25.087450    9446 mustload.go:65] Loading cluster: multinode-681000
	I1205 11:37:25.087503    9446 notify.go:220] Checking for updates...
	I1205 11:37:25.087658    9446 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:25.087666    9446 status.go:174] checking status of multinode-681000 ...
	I1205 11:37:25.087949    9446 status.go:371] multinode-681000 host status = "Stopped" (err=<nil>)
	I1205 11:37:25.087953    9446 status.go:384] host is not running, skipping remaining checks
	I1205 11:37:25.087955    9446 status.go:176] multinode-681000 status: &{Name:multinode-681000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr": multinode-681000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-681000 status --alsologtostderr": multinode-681000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.5855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.28s)
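
The two "incorrect number" assertions fire because the status output above contains exactly one node block: the worker nodes were never created, so only the control-plane entry is printed. The check is essentially a substring count over the status text; a sketch of that logic (illustrative, not the actual test source):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as captured above: a single node block.
	status := `multinode-681000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	wantNodes := 2 // illustrative; the real expectation depends on how many nodes were added

	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}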

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-681000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-681000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.191809625s)

                                                
                                                
-- stdout --
	* [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	* Restarting existing qemu2 VM for "multinode-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:37:25.155296    9450 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:25.155450    9450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:25.155454    9450 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:25.155456    9450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:25.155569    9450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:25.156626    9450 out.go:352] Setting JSON to false
	I1205 11:37:25.174167    9450 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5814,"bootTime":1733421631,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:37:25.174259    9450 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:37:25.179015    9450 out.go:177] * [multinode-681000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:37:25.186029    9450 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:37:25.186079    9450 notify.go:220] Checking for updates...
	I1205 11:37:25.193010    9450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:37:25.195994    9450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:37:25.198997    9450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:37:25.202083    9450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:37:25.205051    9450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:37:25.208281    9450 config.go:182] Loaded profile config "multinode-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:25.208568    9450 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:37:25.212952    9450 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:37:25.219936    9450 start.go:297] selected driver: qemu2
	I1205 11:37:25.219942    9450 start.go:901] validating driver "qemu2" against &{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:37:25.219994    9450 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:37:25.222477    9450 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:37:25.222500    9450 cni.go:84] Creating CNI manager for ""
	I1205 11:37:25.222522    9450 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:37:25.222569    9450 start.go:340] cluster config:
	{Name:multinode-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:37:25.227142    9450 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:25.233790    9450 out.go:177] * Starting "multinode-681000" primary control-plane node in "multinode-681000" cluster
	I1205 11:37:25.237999    9450 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:37:25.238017    9450 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:37:25.238026    9450 cache.go:56] Caching tarball of preloaded images
	I1205 11:37:25.238086    9450 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:37:25.238093    9450 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:37:25.238149    9450 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/multinode-681000/config.json ...
	I1205 11:37:25.238566    9450 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:25.238600    9450 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "multinode-681000"
	I1205 11:37:25.238608    9450 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:37:25.238615    9450 fix.go:54] fixHost starting: 
	I1205 11:37:25.238746    9450 fix.go:112] recreateIfNeeded on multinode-681000: state=Stopped err=<nil>
	W1205 11:37:25.238754    9450 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:37:25.246981    9450 out.go:177] * Restarting existing qemu2 VM for "multinode-681000" ...
	I1205 11:37:25.250935    9450 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:25.250979    9450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4f:49:0c:b3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:37:25.253269    9450 main.go:141] libmachine: STDOUT: 
	I1205 11:37:25.253289    9450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:25.253320    9450 fix.go:56] duration metric: took 14.70625ms for fixHost
	I1205 11:37:25.253324    9450 start.go:83] releasing machines lock for "multinode-681000", held for 14.720042ms
	W1205 11:37:25.253330    9450 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:37:25.253383    9450 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:25.253388    9450 start.go:729] Will try again in 5 seconds ...
	I1205 11:37:30.254196    9450 start.go:360] acquireMachinesLock for multinode-681000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:30.254719    9450 start.go:364] duration metric: took 420.25µs to acquireMachinesLock for "multinode-681000"
	I1205 11:37:30.254873    9450 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:37:30.254896    9450 fix.go:54] fixHost starting: 
	I1205 11:37:30.255660    9450 fix.go:112] recreateIfNeeded on multinode-681000: state=Stopped err=<nil>
	W1205 11:37:30.255693    9450 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:37:30.264212    9450 out.go:177] * Restarting existing qemu2 VM for "multinode-681000" ...
	I1205 11:37:30.268203    9450 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:30.268468    9450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4f:49:0c:b3:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/multinode-681000/disk.qcow2
	I1205 11:37:30.279487    9450 main.go:141] libmachine: STDOUT: 
	I1205 11:37:30.279552    9450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:30.279634    9450 fix.go:56] duration metric: took 24.741083ms for fixHost
	I1205 11:37:30.279652    9450 start.go:83] releasing machines lock for "multinode-681000", held for 24.909083ms
	W1205 11:37:30.279857    9450 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:30.287170    9450 out.go:201] 
	W1205 11:37:30.291274    9450 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:37:30.291301    9450 out.go:270] * 
	* 
	W1205 11:37:30.293904    9450 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:37:30.302012    9450 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-681000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (73.043625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
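
The stderr log above shows minikube's start retry in full: StartHost fails (start.go:714), the driver waits five seconds (start.go:729), retries once, and only then exits with GUEST_PROVISION. A compressed sketch of that control flow, where startHost is a hypothetical stand-in for the driver call that keeps hitting the refused socket:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical stand-in for the qemu2 driver start seen failing above.
	startHost := func() error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	var err error
	for attempt := 1; attempt <= 2; attempt++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if attempt == 1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
}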

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-681000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-681000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-681000-m01 --driver=qemu2 : exit status 80 (9.940394958s)

                                                
                                                
-- stdout --
	* [multinode-681000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-681000-m01" primary control-plane node in "multinode-681000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-681000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-681000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-681000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-681000-m02 --driver=qemu2 : exit status 80 (9.873303541s)

                                                
                                                
-- stdout --
	* [multinode-681000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-681000-m02" primary control-plane node in "multinode-681000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-681000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-681000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-681000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-681000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-681000: exit status 83 (88.110792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-681000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-681000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-681000 -n multinode-681000: exit status 7 (34.765458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.05s)
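
For context on what this test exercises: minikube names a cluster's additional nodes <profile>-m02, <profile>-m03, and so on, so a new profile called multinode-681000-m01 can collide with the node-naming scheme of the existing multinode-681000 cluster. In this run the validation outcome is never observable, since both starts die on socket_vmnet first. A sketch of the suffix pattern involved (illustrative only, not minikube's own validation code):

package main

import (
	"fmt"
	"regexp"
)

// looksLikeNodeName reports whether a profile name carries the "-mNN" suffix
// minikube uses for a cluster's additional nodes (hypothetical helper).
func looksLikeNodeName(profile string) bool {
	return regexp.MustCompile(`-m\d{2}$`).MatchString(profile)
}

func main() {
	fmt.Println(looksLikeNodeName("multinode-681000-m01")) // true
	fmt.Println(looksLikeNodeName("multinode-681000"))     // false
}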

                                                
                                    
TestPreload (10.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-990000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-990000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.976897834s)

                                                
                                                
-- stdout --
	* [test-preload-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-990000" primary control-plane node in "test-preload-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:37:50.589264    9505 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:37:50.589448    9505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:50.589454    9505 out.go:358] Setting ErrFile to fd 2...
	I1205 11:37:50.589456    9505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:37:50.589577    9505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:37:50.590684    9505 out.go:352] Setting JSON to false
	I1205 11:37:50.608742    9505 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5839,"bootTime":1733421631,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:37:50.608814    9505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:37:50.614088    9505 out.go:177] * [test-preload-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:37:50.623040    9505 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:37:50.623119    9505 notify.go:220] Checking for updates...
	I1205 11:37:50.628964    9505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:37:50.632047    9505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:37:50.633325    9505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:37:50.636034    9505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:37:50.639012    9505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:37:50.642467    9505 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:37:50.642526    9505 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:37:50.646942    9505 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:37:50.653987    9505 start.go:297] selected driver: qemu2
	I1205 11:37:50.653993    9505 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:37:50.653999    9505 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:37:50.656522    9505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:37:50.660009    9505 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:37:50.663115    9505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:37:50.663142    9505 cni.go:84] Creating CNI manager for ""
	I1205 11:37:50.663164    9505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:37:50.663168    9505 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:37:50.663201    9505 start.go:340] cluster config:
	{Name:test-preload-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:37:50.667796    9505 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.674957    9505 out.go:177] * Starting "test-preload-990000" primary control-plane node in "test-preload-990000" cluster
	I1205 11:37:50.679022    9505 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1205 11:37:50.679108    9505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/test-preload-990000/config.json ...
	I1205 11:37:50.679128    9505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/test-preload-990000/config.json: {Name:mk45ac778c229705dd23de238922f831ae3ccf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:37:50.679129    9505 cache.go:107] acquiring lock: {Name:mkeab60dc1e760c68d37c860302c92ad2f9a4d8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679153    9505 cache.go:107] acquiring lock: {Name:mka87e4cc91a15c45289ae001f292b34432981ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679197    9505 cache.go:107] acquiring lock: {Name:mkcaf32fbf1ed0024fa80614e03ff7c6e45a13d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679130    9505 cache.go:107] acquiring lock: {Name:mkf2f9504745b78223a295d4db642411c341d99c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679341    9505 cache.go:107] acquiring lock: {Name:mk3da0b883880fc57b2c81562cc1a81b00360922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679345    9505 cache.go:107] acquiring lock: {Name:mkff1433b9b771fd3cae3ba705c493db077029af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679402    9505 cache.go:107] acquiring lock: {Name:mkc118ee64e5040a7038d53a372abe7c2145c789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679533    9505 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:37:50.679669    9505 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 11:37:50.679675    9505 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 11:37:50.679835    9505 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 11:37:50.679345    9505 cache.go:107] acquiring lock: {Name:mkbf877cbbe2a3d9c6c00436d75e4365a1242f39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:37:50.679836    9505 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 11:37:50.679907    9505 start.go:360] acquireMachinesLock for test-preload-990000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:50.679852    9505 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:37:50.679989    9505 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:37:50.679991    9505 start.go:364] duration metric: took 70.542µs to acquireMachinesLock for "test-preload-990000"
	I1205 11:37:50.680004    9505 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:37:50.680004    9505 start.go:93] Provisioning new machine with config: &{Name:test-preload-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:37:50.680047    9505 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:37:50.685408    9505 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:37:50.689652    9505 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:37:50.689739    9505 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 11:37:50.689776    9505 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 11:37:50.689802    9505 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:37:50.689835    9505 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:37:50.690114    9505 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 11:37:50.690320    9505 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:37:50.690378    9505 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 11:37:50.703728    9505 start.go:159] libmachine.API.Create for "test-preload-990000" (driver="qemu2")
	I1205 11:37:50.703751    9505 client.go:168] LocalClient.Create starting
	I1205 11:37:50.703836    9505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:37:50.703875    9505 main.go:141] libmachine: Decoding PEM data...
	I1205 11:37:50.703885    9505 main.go:141] libmachine: Parsing certificate...
	I1205 11:37:50.703938    9505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:37:50.703972    9505 main.go:141] libmachine: Decoding PEM data...
	I1205 11:37:50.703981    9505 main.go:141] libmachine: Parsing certificate...
	I1205 11:37:50.704397    9505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:37:50.871710    9505 main.go:141] libmachine: Creating SSH key...
	I1205 11:37:51.068612    9505 main.go:141] libmachine: Creating Disk image...
	I1205 11:37:51.068631    9505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:37:51.068932    9505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:51.079542    9505 main.go:141] libmachine: STDOUT: 
	I1205 11:37:51.079558    9505 main.go:141] libmachine: STDERR: 
	I1205 11:37:51.079624    9505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2 +20000M
	I1205 11:37:51.088723    9505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:37:51.088742    9505 main.go:141] libmachine: STDERR: 
	I1205 11:37:51.088757    9505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:51.088762    9505 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:37:51.088774    9505 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:51.088803    9505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:01:f3:0f:42:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:51.090716    9505 main.go:141] libmachine: STDOUT: 
	I1205 11:37:51.090727    9505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:51.090743    9505 client.go:171] duration metric: took 386.992583ms to LocalClient.Create
	I1205 11:37:51.158057    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:37:51.178599    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1205 11:37:51.197275    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1205 11:37:51.322842    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1205 11:37:51.322869    9505 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 643.695792ms
	I1205 11:37:51.322878    9505 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1205 11:37:51.329034    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1205 11:37:51.415741    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W1205 11:37:51.467151    9505 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:37:51.467191    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:37:51.532702    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W1205 11:37:51.876118    9505 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:37:51.876230    9505 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:37:52.329140    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:37:52.329181    9505 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.650065042s
	I1205 11:37:52.329204    9505 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:37:53.090991    9505 start.go:128] duration metric: took 2.410943667s to createHost
	I1205 11:37:53.091048    9505 start.go:83] releasing machines lock for "test-preload-990000", held for 2.411069125s
	W1205 11:37:53.091168    9505 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:53.104731    9505 out.go:177] * Deleting "test-preload-990000" in qemu2 ...
	W1205 11:37:53.131148    9505 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:37:53.131191    9505 start.go:729] Will try again in 5 seconds ...
	I1205 11:37:53.659546    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1205 11:37:53.659588    9505 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.980275042s
	I1205 11:37:53.659616    9505 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1205 11:37:53.764405    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1205 11:37:53.764444    9505 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.085148459s
	I1205 11:37:53.764468    9505 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1205 11:37:55.585367    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1205 11:37:55.585412    9505 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.906297083s
	I1205 11:37:55.585437    9505 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1205 11:37:56.057150    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1205 11:37:56.057197    9505 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.378115542s
	I1205 11:37:56.057221    9505 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1205 11:37:57.369609    9505 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1205 11:37:57.369660    9505 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.690315458s
	I1205 11:37:57.369687    9505 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1205 11:37:58.131334    9505 start.go:360] acquireMachinesLock for test-preload-990000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:37:58.131796    9505 start.go:364] duration metric: took 385.25µs to acquireMachinesLock for "test-preload-990000"
	I1205 11:37:58.131933    9505 start.go:93] Provisioning new machine with config: &{Name:test-preload-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:37:58.132200    9505 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:37:58.147049    9505 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:37:58.198237    9505 start.go:159] libmachine.API.Create for "test-preload-990000" (driver="qemu2")
	I1205 11:37:58.198283    9505 client.go:168] LocalClient.Create starting
	I1205 11:37:58.198422    9505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:37:58.198501    9505 main.go:141] libmachine: Decoding PEM data...
	I1205 11:37:58.198518    9505 main.go:141] libmachine: Parsing certificate...
	I1205 11:37:58.198586    9505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:37:58.198642    9505 main.go:141] libmachine: Decoding PEM data...
	I1205 11:37:58.198657    9505 main.go:141] libmachine: Parsing certificate...
	I1205 11:37:58.199194    9505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:37:58.375996    9505 main.go:141] libmachine: Creating SSH key...
	I1205 11:37:58.463867    9505 main.go:141] libmachine: Creating Disk image...
	I1205 11:37:58.463873    9505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:37:58.464073    9505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:58.474509    9505 main.go:141] libmachine: STDOUT: 
	I1205 11:37:58.474531    9505 main.go:141] libmachine: STDERR: 
	I1205 11:37:58.474605    9505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2 +20000M
	I1205 11:37:58.483364    9505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:37:58.483381    9505 main.go:141] libmachine: STDERR: 
	I1205 11:37:58.483392    9505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:58.483398    9505 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:37:58.483406    9505 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:37:58.483455    9505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:38:31:1b:80:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/test-preload-990000/disk.qcow2
	I1205 11:37:58.485418    9505 main.go:141] libmachine: STDOUT: 
	I1205 11:37:58.485433    9505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:37:58.485451    9505 client.go:171] duration metric: took 287.163083ms to LocalClient.Create
	I1205 11:38:00.486337    9505 start.go:128] duration metric: took 2.35410725s to createHost
	I1205 11:38:00.486403    9505 start.go:83] releasing machines lock for "test-preload-990000", held for 2.354603583s
	W1205 11:38:00.486666    9505 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:00.500229    9505 out.go:201] 
	W1205 11:38:00.504325    9505 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:38:00.504346    9505 out.go:270] * 
	* 
	W1205 11:38:00.506424    9505 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:38:00.517182    9505 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-990000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-05 11:38:00.535064 -0800 PST m=+640.508232418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-990000 -n test-preload-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-990000 -n test-preload-990000: exit status 7 (75.382917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-990000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-990000
--- FAIL: TestPreload (10.14s)
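
Every failure above reduces to the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver starts each VM through /opt/socket_vmnet/bin/socket_vmnet_client, which can only hand the VM its network file descriptor if the socket_vmnet daemon is listening on /var/run/socket_vmnet; with the daemon down, every VM creation on this agent dies before boot. A minimal sketch for checking the daemon on the build host follows; the paths are taken from the logs above, while the Homebrew service name is an assumption about how socket_vmnet was installed on this agent:

	# Is anything serving the socket the driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed via Homebrew, restarting the service (name assumed) is one plausible fix:
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet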

TestScheduledStopUnix (10.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-580000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-580000 --memory=2048 --driver=qemu2 : exit status 80 (9.969538292s)

-- stdout --
	* [scheduled-stop-580000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-580000" primary control-plane node in "scheduled-stop-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-580000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-580000" primary control-plane node in "scheduled-stop-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-05 11:38:10.66579 -0800 PST m=+650.639049460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-580000 -n scheduled-stop-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-580000 -n scheduled-stop-580000: exit status 7 (73.551875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-580000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-580000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-580000
--- FAIL: TestScheduledStopUnix (10.12s)

TestSkaffold (12.61s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3031419086 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3031419086 version: (1.016752208s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-224000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-224000 --memory=2600 --driver=qemu2 : exit status 80 (9.809697292s)

-- stdout --
	* [skaffold-224000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-224000" primary control-plane node in "skaffold-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-224000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-224000" primary control-plane node in "skaffold-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-05 11:38:23.276027 -0800 PST m=+663.249401001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-224000 -n skaffold-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-224000 -n skaffold-224000: exit status 7 (71.63525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-224000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-224000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-224000
--- FAIL: TestSkaffold (12.61s)
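
TestScheduledStopUnix and TestSkaffold fail with the identical connection-refused error, which points at the agent environment rather than at the individual tests. One way to confirm that outside the Go harness is to run the same wrapper the driver uses, substituting a trivial command for qemu-system-aarch64 (wrapper and socket path copied from the libmachine logs above; the trailing true is a placeholder added here for illustration):

	# Prints the same "Failed to connect ... Connection refused" if the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true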

TestRunningBinaryUpgrade (640.19s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3622558755 start -p running-upgrade-842000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3622558755 start -p running-upgrade-842000 --memory=2200 --vm-driver=qemu2 : (1m3.458010791s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-842000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-842000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9m2.634421958s)

-- stdout --
	* [running-upgrade-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-842000" primary control-plane node in "running-upgrade-842000" cluster
	* Updating the running qemu2 "running-upgrade-842000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1205 11:39:49.806745    9824 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:39:49.806914    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:39:49.806918    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:39:49.806920    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:39:49.807057    9824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:39:49.808167    9824 out.go:352] Setting JSON to false
	I1205 11:39:49.826430    9824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5958,"bootTime":1733421631,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:39:49.826520    9824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:39:49.831721    9824 out.go:177] * [running-upgrade-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:39:49.838722    9824 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:39:49.838749    9824 notify.go:220] Checking for updates...
	I1205 11:39:49.845540    9824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:39:49.849662    9824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:39:49.852722    9824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:39:49.855670    9824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:39:49.858710    9824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:39:49.861996    9824 config.go:182] Loaded profile config "running-upgrade-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:39:49.863535    9824 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:39:49.866668    9824 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:39:49.870719    9824 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:39:49.875690    9824 start.go:297] selected driver: qemu2
	I1205 11:39:49.875696    9824 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56581 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:39:49.875737    9824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:39:49.878143    9824 cni.go:84] Creating CNI manager for ""
	I1205 11:39:49.878230    9824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:39:49.878268    9824 start.go:340] cluster config:
	{Name:running-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56581 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:39:49.878314    9824 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:39:49.885684    9824 out.go:177] * Starting "running-upgrade-842000" primary control-plane node in "running-upgrade-842000" cluster
	I1205 11:39:49.889656    9824 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:39:49.889669    9824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1205 11:39:49.889676    9824 cache.go:56] Caching tarball of preloaded images
	I1205 11:39:49.889738    9824 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:39:49.889743    9824 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1205 11:39:49.889787    9824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/config.json ...
	I1205 11:39:49.890087    9824 start.go:360] acquireMachinesLock for running-upgrade-842000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:40:02.773725    9824 start.go:364] duration metric: took 12.883741208s to acquireMachinesLock for "running-upgrade-842000"
	I1205 11:40:02.773749    9824 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:40:02.773757    9824 fix.go:54] fixHost starting: 
	I1205 11:40:02.774493    9824 fix.go:112] recreateIfNeeded on running-upgrade-842000: state=Running err=<nil>
	W1205 11:40:02.774507    9824 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:40:02.782662    9824 out.go:177] * Updating the running qemu2 "running-upgrade-842000" VM ...
	I1205 11:40:02.786511    9824 machine.go:93] provisionDockerMachine start ...
	I1205 11:40:02.786625    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.786775    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:02.786780    9824 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 11:40:02.834546    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-842000
	
	I1205 11:40:02.834562    9824 buildroot.go:166] provisioning hostname "running-upgrade-842000"
	I1205 11:40:02.834632    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.834753    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:02.834760    9824 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-842000 && echo "running-upgrade-842000" | sudo tee /etc/hostname
	I1205 11:40:02.890936    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-842000
	
	I1205 11:40:02.891006    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.891123    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:02.891131    9824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-842000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-842000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-842000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 11:40:02.940537    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
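
The hostname script above is idempotent: it rewrites any existing 127.0.1.1 entry in place and only appends a new one when none exists. A quick way to confirm the result on the guest (illustrative, not part of this run):

    grep -n '^127\.0\.1\.1' /etc/hosts   # expect: 127.0.1.1 running-upgrade-842000
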
	I1205 11:40:02.940549    9824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20053-7409/.minikube CaCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20053-7409/.minikube}
	I1205 11:40:02.940559    9824 buildroot.go:174] setting up certificates
	I1205 11:40:02.940563    9824 provision.go:84] configureAuth start
	I1205 11:40:02.940569    9824 provision.go:143] copyHostCerts
	I1205 11:40:02.940628    9824 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem, removing ...
	I1205 11:40:02.940636    9824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem
	I1205 11:40:02.940753    9824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem (1078 bytes)
	I1205 11:40:02.940935    9824 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem, removing ...
	I1205 11:40:02.940939    9824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem
	I1205 11:40:02.940989    9824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem (1123 bytes)
	I1205 11:40:02.941091    9824 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem, removing ...
	I1205 11:40:02.941095    9824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem
	I1205 11:40:02.941136    9824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem (1679 bytes)
	I1205 11:40:02.941246    9824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-842000 san=[127.0.0.1 localhost minikube running-upgrade-842000]
	I1205 11:40:03.007439    9824 provision.go:177] copyRemoteCerts
	I1205 11:40:03.007502    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 11:40:03.007515    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	I1205 11:40:03.036937    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 11:40:03.046661    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 11:40:03.058070    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 11:40:03.067489    9824 provision.go:87] duration metric: took 126.913833ms to configureAuth
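
configureAuth regenerates the Docker TLS server certificate with the SANs listed above (127.0.0.1, localhost, minikube, running-upgrade-842000). minikube does this in Go via its own crypto helpers; a rough openssl equivalent under the same CA layout, purely as a sketch:

    # Sketch only: issue a server cert signed by minikube's CA with the logged SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.running-upgrade-842000"
    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-842000')
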
	I1205 11:40:03.067502    9824 buildroot.go:189] setting minikube options for container-runtime
	I1205 11:40:03.067622    9824 config.go:182] Loaded profile config "running-upgrade-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:40:03.067678    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:03.067771    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:03.067777    9824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 11:40:03.138199    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1205 11:40:03.138212    9824 buildroot.go:70] root file system type: tmpfs
	I1205 11:40:03.138272    9824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 11:40:03.138335    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:03.138453    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:03.138490    9824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 11:40:03.211069    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 11:40:03.211136    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:03.211261    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:03.211271    9824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 11:40:03.270242    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 11:40:03.270254    9824 machine.go:96] duration metric: took 483.73625ms to provisionDockerMachine
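
The final SSH command applies the unit with a compare-and-swap: the rendered file lands in docker.service.new, gets diffed against the live unit, and is only moved into place (with daemon-reload, enable, restart) when the two differ. The same guard pattern in isolation:

    # Idempotent unit update: only reload/restart Docker when the rendered unit changed
    # (diff exits non-zero on a difference, which triggers the replace branch).
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
      sudo mv "$new" "$cur"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }
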
	I1205 11:40:03.270261    9824 start.go:293] postStartSetup for "running-upgrade-842000" (driver="qemu2")
	I1205 11:40:03.270268    9824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 11:40:03.270342    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 11:40:03.270360    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	I1205 11:40:03.301158    9824 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 11:40:03.302850    9824 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 11:40:03.302858    9824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20053-7409/.minikube/addons for local assets ...
	I1205 11:40:03.302946    9824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20053-7409/.minikube/files for local assets ...
	I1205 11:40:03.303034    9824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem -> 79222.pem in /etc/ssl/certs
	I1205 11:40:03.303137    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 11:40:03.306415    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem --> /etc/ssl/certs/79222.pem (1708 bytes)
	I1205 11:40:03.314009    9824 start.go:296] duration metric: took 43.742208ms for postStartSetup
	I1205 11:40:03.314025    9824 fix.go:56] duration metric: took 540.277375ms for fixHost
	I1205 11:40:03.314089    9824 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:03.314207    9824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013fefc0] 0x101401800 <nil>  [] 0s} localhost 56489 <nil> <nil>}
	I1205 11:40:03.314213    9824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 11:40:03.367072    9824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427603.281569611
	
	I1205 11:40:03.367085    9824 fix.go:216] guest clock: 1733427603.281569611
	I1205 11:40:03.367089    9824 fix.go:229] Guest: 2024-12-05 11:40:03.281569611 -0800 PST Remote: 2024-12-05 11:40:03.314027 -0800 PST m=+13.532089126 (delta=-32.457389ms)
	I1205 11:40:03.367100    9824 fix.go:200] guest clock delta is within tolerance: -32.457389ms
	I1205 11:40:03.367103    9824 start.go:83] releasing machines lock for "running-upgrade-842000", held for 593.369834ms
	I1205 11:40:03.367182    9824 ssh_runner.go:195] Run: cat /version.json
	I1205 11:40:03.367191    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	I1205 11:40:03.367279    9824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 11:40:03.367320    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	W1205 11:40:03.395785    9824 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 11:40:03.395852    9824 ssh_runner.go:195] Run: systemctl --version
	I1205 11:40:03.398184    9824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 11:40:03.401800    9824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 11:40:03.401857    9824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1205 11:40:03.406431    9824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1205 11:40:03.412175    9824 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
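
The two find/sed passes above force every bridge- and podman-managed CNI config onto the cluster pod CIDR (subnet 10.244.0.0/16, gateway 10.244.0.1) and drop IPv6 entries. A check of the one file the log reports configuring (expected values taken from the sed expressions):

    grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
    #   "subnet": "10.244.0.0/16"
    #   "gateway": "10.244.0.1"
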
	I1205 11:40:03.412185    9824 start.go:495] detecting cgroup driver to use...
	I1205 11:40:03.412275    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:40:03.424493    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1205 11:40:03.430410    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 11:40:03.436421    9824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 11:40:03.436506    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 11:40:03.442260    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:40:03.450823    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 11:40:03.457783    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:40:03.461163    9824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 11:40:03.464817    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 11:40:03.468794    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 11:40:03.481366    9824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 11:40:03.485778    9824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 11:40:03.489091    9824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 11:40:03.492181    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:03.597281    9824 ssh_runner.go:195] Run: sudo systemctl restart containerd
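
All of those sed passes amount to pinning containerd's runc shim to the cgroupfs driver (plus pause-image, CNI-dir, and unprivileged-port settings). The pivotal edit, runnable on its own:

    # Force SystemdCgroup=false (cgroupfs) and restart containerd, as the log does above.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
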
	I1205 11:40:03.617639    9824 start.go:495] detecting cgroup driver to use...
	I1205 11:40:03.617713    9824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 11:40:03.626004    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:40:03.631159    9824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 11:40:03.651805    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:40:03.662996    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:40:03.672464    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:40:03.679278    9824 ssh_runner.go:195] Run: which cri-dockerd
	I1205 11:40:03.680731    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 11:40:03.683671    9824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
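
The 189-byte 10-cni.conf drop-in is written from memory and never echoed to the log. A representative drop-in pointing cri-dockerd at CNI, based on what minikube typically ships (assumed content, not captured from this run):

    # Assumption: the flags below are illustrative, not read from this log.
    sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=promiscuous-bridge
    EOF
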
	I1205 11:40:03.692637    9824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 11:40:03.820599    9824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 11:40:03.933396    9824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 11:40:03.933455    9824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 11:40:03.939099    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:04.062280    9824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:40:25.507543    9824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.445438583s)
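
The 130-byte daemon.json written at 11:40:03.933 is likewise never printed. For the cgroupfs driver selected above, a plausible payload looks like this (assumed content, not captured from this run):

    # Assumption: representative daemon.json for the cgroupfs choice logged above.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
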
	I1205 11:40:25.507632    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 11:40:25.512051    9824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 11:40:25.519573    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:40:25.524508    9824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 11:40:25.614473    9824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 11:40:25.693784    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:25.774148    9824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 11:40:25.779801    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:40:25.784869    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:25.874658    9824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 11:40:25.915880    9824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 11:40:25.915969    9824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 11:40:25.917948    9824 start.go:563] Will wait 60s for crictl version
	I1205 11:40:25.918007    9824 ssh_runner.go:195] Run: which crictl
	I1205 11:40:25.920249    9824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 11:40:25.932492    9824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1205 11:40:25.932580    9824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:40:25.945416    9824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:40:25.961660    9824 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1205 11:40:25.961812    9824 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1205 11:40:25.963404    9824 kubeadm.go:883] updating cluster {Name:running-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56581 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1205 11:40:25.963448    9824 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:40:25.963492    9824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:40:25.973563    9824 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:40:25.973571    9824 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:40:25.973622    9824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:40:25.976608    9824 ssh_runner.go:195] Run: which lz4
	I1205 11:40:25.978028    9824 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 11:40:25.979211    9824 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 11:40:25.979222    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1205 11:40:26.939701    9824 docker.go:653] duration metric: took 961.720625ms to copy over tarball
	I1205 11:40:26.939777    9824 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 11:40:28.216894    9824 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.277114s)
	I1205 11:40:28.216909    9824 ssh_runner.go:146] rm: /preloaded.tar.lz4
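
The preload shortcut ships a tarball of pre-pulled image layers straight into /var, so the docker restart that follows picks the images up without any registry pulls. To inspect such a tarball on the host before transfer (illustrative):

    # List the first entries of a preload tarball without extracting it.
    lz4 -dc preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | tar -tf - | head
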
	I1205 11:40:28.233249    9824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:40:28.236588    9824 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1205 11:40:28.241726    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:28.326361    9824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:40:29.524398    9824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.198030625s)
	I1205 11:40:29.524514    9824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:40:29.540639    9824 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:40:29.540650    9824 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:40:29.540656    9824 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 11:40:29.544289    9824 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:29.546235    9824 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:29.548697    9824 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:29.548868    9824 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:29.551176    9824 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:29.551205    9824 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:29.552520    9824 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:29.552718    9824 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:29.553690    9824 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:29.553800    9824 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:29.555329    9824 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:29.555435    9824 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:29.555811    9824 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:40:29.557218    9824 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:29.557315    9824 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:29.557791    9824 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:40:30.088707    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:30.102168    9824 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1205 11:40:30.102211    9824 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:30.102262    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:30.114219    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1205 11:40:30.140245    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:30.152603    9824 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1205 11:40:30.152627    9824 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:30.152701    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:30.163205    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1205 11:40:30.197316    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:30.208067    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:30.209318    9824 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1205 11:40:30.209339    9824 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:30.209599    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:30.222094    9824 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1205 11:40:30.222118    9824 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:30.222193    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:30.228484    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1205 11:40:30.235864    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1205 11:40:30.316557    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:30.329089    9824 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1205 11:40:30.329118    9824 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:30.329183    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:30.340565    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1205 11:40:30.370471    9824 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:40:30.370633    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:30.380893    9824 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1205 11:40:30.380920    9824 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:30.380983    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:30.394686    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:40:30.394810    9824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:40:30.396473    9824 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1205 11:40:30.396485    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1205 11:40:30.446573    9824 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:40:30.446589    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1205 11:40:30.448903    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W1205 11:40:30.472714    9824 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:40:30.472947    9824 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:30.505956    9824 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 11:40:30.506043    9824 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1205 11:40:30.506070    9824 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1205 11:40:30.506070    9824 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 11:40:30.506086    9824 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:30.506139    9824 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:30.506139    9824 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1205 11:40:30.521267    9824 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:40:30.521402    9824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 11:40:30.523211    9824 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1205 11:40:30.523234    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1205 11:40:30.532021    9824 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 11:40:30.532033    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1205 11:40:30.560708    9824 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1205 11:40:30.560750    9824 cache_images.go:92] duration metric: took 1.020096292s to LoadCachedImages
	W1205 11:40:30.560824    9824 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
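
This warning pinpoints what went wrong with image loading: the host-side cache never contained the kube-proxy image file, so LoadCachedImages gives up after transferring only coredns and pause. Reproducing the failing check on the host (path taken from the warning above):

    # The stat that LoadCachedImages trips over.
    stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
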
	I1205 11:40:30.560831    9824 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1205 11:40:30.560896    9824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-842000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 11:40:30.560974    9824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 11:40:30.575956    9824 cni.go:84] Creating CNI manager for ""
	I1205 11:40:30.575972    9824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:40:30.575982    9824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 11:40:30.575991    9824 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-842000 NodeName:running-upgrade-842000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 11:40:30.576062    9824 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-842000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 11:40:30.576131    9824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1205 11:40:30.579974    9824 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 11:40:30.580053    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 11:40:30.582978    9824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 11:40:30.588658    9824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 11:40:30.594154    9824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
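
The rendered kubeadm config is staged as kubeadm.yaml.new and only applied if it differs from the live copy (see the diff at 11:40:31.180 below). One way to sanity-check such a config by hand, assuming the node's bundled kubeadm:

    # Illustrative: render what kubeadm would do without changing the node.
    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
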
	I1205 11:40:30.600337    9824 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1205 11:40:30.601943    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:30.698333    9824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:40:30.704993    9824 certs.go:68] Setting up /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000 for IP: 10.0.2.15
	I1205 11:40:30.705003    9824 certs.go:194] generating shared ca certs ...
	I1205 11:40:30.705012    9824 certs.go:226] acquiring lock for ca certs: {Name:mk649b36c637f895ef0e3cb84362644c97069221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:30.705151    9824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.key
	I1205 11:40:30.705345    9824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.key
	I1205 11:40:30.705353    9824 certs.go:256] generating profile certs ...
	I1205 11:40:30.705555    9824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.key
	I1205 11:40:30.705578    9824 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key.4d09cf80
	I1205 11:40:30.705595    9824 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt.4d09cf80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1205 11:40:31.027113    9824 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt.4d09cf80 ...
	I1205 11:40:31.027128    9824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt.4d09cf80: {Name:mkd6ee86ebd2c6a4f76a2bdf556f3dea8117c9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:31.027502    9824 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key.4d09cf80 ...
	I1205 11:40:31.027515    9824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key.4d09cf80: {Name:mk8a64ab13f41dbbcc1dbc9bd7315dc753ac147f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:31.027693    9824 certs.go:381] copying /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt.4d09cf80 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt
	I1205 11:40:31.027829    9824 certs.go:385] copying /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key.4d09cf80 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key
	I1205 11:40:31.028192    9824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/proxy-client.key
	I1205 11:40:31.028357    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922.pem (1338 bytes)
	W1205 11:40:31.028517    9824 certs.go:480] ignoring /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922_empty.pem, impossibly tiny 0 bytes
	I1205 11:40:31.028523    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 11:40:31.028596    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem (1078 bytes)
	I1205 11:40:31.028676    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem (1123 bytes)
	I1205 11:40:31.028800    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem (1679 bytes)
	I1205 11:40:31.028956    9824 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem (1708 bytes)
	I1205 11:40:31.029797    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 11:40:31.039072    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 11:40:31.047251    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 11:40:31.054544    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 11:40:31.061770    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 11:40:31.068541    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 11:40:31.075108    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 11:40:31.082608    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 11:40:31.089943    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem --> /usr/share/ca-certificates/79222.pem (1708 bytes)
	I1205 11:40:31.097301    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 11:40:31.104380    9824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922.pem --> /usr/share/ca-certificates/7922.pem (1338 bytes)
	I1205 11:40:31.111361    9824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 11:40:31.116620    9824 ssh_runner.go:195] Run: openssl version
	I1205 11:40:31.118463    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79222.pem && ln -fs /usr/share/ca-certificates/79222.pem /etc/ssl/certs/79222.pem"
	I1205 11:40:31.121719    9824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79222.pem
	I1205 11:40:31.123166    9824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:28 /usr/share/ca-certificates/79222.pem
	I1205 11:40:31.123195    9824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79222.pem
	I1205 11:40:31.124943    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/79222.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 11:40:31.127647    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 11:40:31.130667    9824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:31.132279    9824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:31.132308    9824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:31.134122    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 11:40:31.137606    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7922.pem && ln -fs /usr/share/ca-certificates/7922.pem /etc/ssl/certs/7922.pem"
	I1205 11:40:31.141310    9824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7922.pem
	I1205 11:40:31.142751    9824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:28 /usr/share/ca-certificates/7922.pem
	I1205 11:40:31.142776    9824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7922.pem
	I1205 11:40:31.144691    9824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7922.pem /etc/ssl/certs/51391683.0"
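
Each CA is linked under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL's default lookup resolves trust anchors. Recreating one link by hand:

    # The <subject-hash>.0 symlink convention used above, done manually.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
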
	I1205 11:40:31.147448    9824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 11:40:31.148967    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 11:40:31.150903    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 11:40:31.152764    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 11:40:31.154708    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 11:40:31.156999    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 11:40:31.158864    9824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
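
The -checkend 86400 probes ask whether each control-plane cert stays valid for at least another 24 hours (86400 seconds); minikube uses the result to decide whether regeneration is needed. Standalone:

    # Exit 0 if the cert is still valid 24h from now, 1 if it expires sooner.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid beyond 24h' || echo 'expiring: regenerate'
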
	I1205 11:40:31.160687    9824 kubeadm.go:392] StartCluster: {Name:running-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56581 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:40:31.160763    9824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:40:31.171274    9824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 11:40:31.174575    9824 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 11:40:31.174585    9824 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 11:40:31.174618    9824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 11:40:31.178427    9824 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:31.178852    9824 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-842000" does not appear in /Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:40:31.178967    9824 kubeconfig.go:62] /Users/jenkins/minikube-integration/20053-7409/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-842000" cluster setting kubeconfig missing "running-upgrade-842000" context setting]
	I1205 11:40:31.179154    9824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/kubeconfig: {Name:mk997d47fa87fe6dec2166788b387274f153b2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:31.179621    9824 kapi.go:59] client config for running-upgrade-842000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.key", CAFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102e5b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
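The rest.Config dump above shows the repaired kubeconfig authenticates with the profile's client certificate against https://10.0.2.15:8443. A minimal client-go sketch that builds an equivalent clientset from those same files (assumes k8s.io/client-go is available in the module; this is not minikube's own code path):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and file paths copied from the rest.Config logged above.
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("clientset ready:", clientset != nil)
    }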
	I1205 11:40:31.180573    9824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 11:40:31.183471    9824 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-842000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
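The drift is small but kubelet-relevant: the CRI socket gains the unix:// scheme, the cgroup driver flips from systemd to cgroupfs, and hairpinMode plus runtimeRequestTimeout are added. Because the on-disk kubeadm.yaml no longer matches what this minikube build would generate, the restart path reconfigures the control plane from the new file rather than reusing the old one.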
	I1205 11:40:31.183476    9824 kubeadm.go:1160] stopping kube-system containers ...
	I1205 11:40:31.183520    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:40:31.195170    9824 docker.go:483] Stopping containers: [33eef3baf5d2 53997cbaa461 73938e798159 be4390827229 963ce4446a34 34c7cd6edc31 84b34aaafbea 2e48bb217d7a ec0552ab0990 2ff36a3bbfe5 811be565f79e ede713ea0239 08945b992bd0 94c71d08b54c 02ddf96cec5c 7983f41da13d 8388ea218d97 e2e5d052a2f1 235ae1c8e16e 3a40d081b24c 0d31abb84ed4]
	I1205 11:40:31.195243    9824 ssh_runner.go:195] Run: docker stop 33eef3baf5d2 53997cbaa461 73938e798159 be4390827229 963ce4446a34 34c7cd6edc31 84b34aaafbea 2e48bb217d7a ec0552ab0990 2ff36a3bbfe5 811be565f79e ede713ea0239 08945b992bd0 94c71d08b54c 02ddf96cec5c 7983f41da13d 8388ea218d97 e2e5d052a2f1 235ae1c8e16e 3a40d081b24c 0d31abb84ed4
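The name=k8s_.*_(kube-system)_ filter leans on the dockershim naming convention (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), so it matches exactly the containers backing kube-system pods; stopping them, and then the kubelet, clears the way for kubeadm to recreate the control plane.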
	I1205 11:40:31.206883    9824 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 11:40:31.285769    9824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:40:31.290016    9824 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec  5 19:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec  5 19:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  5 19:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec  5 19:39 /etc/kubernetes/scheduler.conf
	
	I1205 11:40:31.290057    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/admin.conf
	I1205 11:40:31.293599    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:31.293647    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:40:31.297117    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/kubelet.conf
	I1205 11:40:31.299843    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:31.299874    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:40:31.302756    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/controller-manager.conf
	I1205 11:40:31.305813    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:31.305850    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:40:31.309515    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/scheduler.conf
	I1205 11:40:31.312456    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:31.312487    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
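All four grep probes above exit with status 1, meaning none of the existing kubeconfigs reference the expected control-plane endpoint https://control-plane.minikube.internal:56581, so each file is removed and left for the kubeadm kubeconfig phase below to regenerate.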
	I1205 11:40:31.315096    9824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:40:31.318277    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:31.356100    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:31.892884    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:32.148368    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:32.180177    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
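Rather than a full `kubeadm init`, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml, which leaves existing on-disk state such as the etcd data directory in place.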
	I1205 11:40:32.205177    9824 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:40:32.205273    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:32.706974    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:33.207442    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:33.707354    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:34.207357    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:34.214121    9824 api_server.go:72] duration metric: took 2.0089625s to wait for apiserver process to appear ...
	I1205 11:40:34.214132    9824 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:40:34.214149    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:39.216170    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:39.216205    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:44.216551    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:44.216606    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:49.217098    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:49.217150    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:54.217750    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:54.217873    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:59.218944    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:59.218999    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:04.220448    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:04.220501    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:09.221397    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:09.221491    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:14.222928    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:14.222972    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:19.225123    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:19.225172    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:24.225484    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:24.225527    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:29.227831    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:29.227866    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:34.230101    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
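Every healthz probe above fails the same way: the client gives up after its five-second budget with a context deadline error, and minikube falls back to gathering component logs. A self-contained sketch of that probe loop (endpoint taken from the log; the InsecureSkipVerify setting is an assumption to keep the sketch standalone, whereas minikube verifies against its own CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Mirrors the ~5s gap between each check and its "stopped:" line.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 3; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // the failure mode seen at api_server.go:269
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
        }
    }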
	I1205 11:41:34.230288    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:34.242310    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:41:34.242388    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:34.253184    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:41:34.253267    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:34.268533    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:41:34.268612    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:34.279206    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:41:34.279273    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:34.289751    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:41:34.289840    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:34.300229    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:41:34.300309    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:34.310708    9824 logs.go:282] 0 containers: []
	W1205 11:41:34.310720    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:34.310789    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:34.321424    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:41:34.321439    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:41:34.321445    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:41:34.335549    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:41:34.335559    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:41:34.352228    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:41:34.352239    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:41:34.364086    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:41:34.364097    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:34.380412    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:34.380423    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:41:34.391119    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:41:34.391213    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
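These two entries are the Kubernetes node authorizer at work: a kubelet may read a configmap only when a pod scheduled to its node references it, and "no relationship found between node ... and this object" suggests the kube-proxy pod had not yet been bound to running-upgrade-842000 when the kubelet tried to list the configmap. This is typically transient noise during a restart rather than the cause of the unreachable apiserver.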
	I1205 11:41:34.424054    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:34.424061    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:34.529550    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:41:34.529561    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:41:34.549290    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:41:34.549301    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:41:34.566468    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:41:34.566484    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:41:34.580995    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:41:34.581005    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:41:34.592564    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:41:34.592574    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:41:34.604219    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:34.604234    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:34.630660    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:34.630670    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:34.634789    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:41:34.634795    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:41:34.653912    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:41:34.653922    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:41:34.695228    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:41:34.695238    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:41:34.709799    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:41:34.709809    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:41:34.721597    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:41:34.721609    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:41:34.737689    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:41:34.737699    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:41:34.749549    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:41:34.749567    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:41:34.749598    9824 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1205 11:41:34.749604    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:41:34.749610    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:41:34.749613    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:41:34.749616    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
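From here the run settles into a loop: probe /healthz, time out after five seconds, gather the same component logs, and surface the same two kubelet warnings. The cycles below (11:41:49, 11:42:05, 11:42:20, 11:42:36, 11:42:51) repeat that pattern without the apiserver ever answering.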
	I1205 11:41:44.753625    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:49.755845    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:49.756368    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:49.795073    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:41:49.795229    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:49.814838    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:41:49.814932    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:49.829511    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:41:49.829607    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:49.842723    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:41:49.842799    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:49.853969    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:41:49.854042    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:49.864585    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:41:49.864668    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:49.875028    9824 logs.go:282] 0 containers: []
	W1205 11:41:49.875042    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:49.875114    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:49.885212    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:41:49.885225    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:41:49.885230    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:41:49.900309    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:49.900319    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:49.925476    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:49.925483    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:49.930022    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:49.930031    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:49.972772    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:41:49.972783    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:41:49.988526    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:41:49.988536    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:50.000531    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:41:50.000541    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:41:50.015006    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:41:50.015018    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:41:50.026934    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:41:50.026948    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:41:50.040559    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:41:50.040569    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:41:50.053856    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:41:50.053866    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:41:50.071647    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:41:50.071658    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:41:50.083742    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:41:50.083752    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:41:50.095318    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:50.095331    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:41:50.106266    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:41:50.106361    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:41:50.138344    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:41:50.138352    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:41:50.155132    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:41:50.155143    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:41:50.194995    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:41:50.195005    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:41:50.211013    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:41:50.211023    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:41:50.227230    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:41:50.227245    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:41:50.239257    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:41:50.239267    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:41:50.239293    9824 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1205 11:41:50.239297    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:41:50.239303    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:41:50.239306    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:41:50.239317    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:42:00.243107    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:05.245432    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:05.245681    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:05.266927    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:42:05.267047    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:05.281293    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:42:05.281374    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:05.296640    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:42:05.296723    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:05.306796    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:42:05.306875    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:05.317666    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:42:05.317735    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:05.331669    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:42:05.331757    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:05.341720    9824 logs.go:282] 0 containers: []
	W1205 11:42:05.341732    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:05.341800    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:05.352184    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:42:05.352207    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:05.352214    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:05.357375    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:42:05.357381    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:42:05.374437    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:42:05.374449    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:42:05.386620    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:42:05.386633    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:42:05.397993    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:42:05.398005    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:42:05.437591    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:42:05.437604    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:42:05.455482    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:42:05.455493    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:42:05.470940    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:42:05.470950    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:42:05.484358    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:42:05.484373    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:42:05.498337    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:42:05.498348    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:42:05.510243    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:42:05.510256    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:42:05.526096    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:42:05.526107    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:42:05.537901    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:42:05.537910    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:42:05.548985    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:42:05.548996    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:05.563165    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:05.563175    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:42:05.571680    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:05.571779    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:05.603647    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:05.603655    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:05.647008    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:42:05.647019    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:42:05.661883    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:42:05.661893    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:42:05.673456    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:05.673466    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:05.699306    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:05.699315    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:42:05.699338    9824 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1205 11:42:05.699342    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:05.699345    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:05.699349    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:05.699352    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:42:15.703501    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:20.704354    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:20.704570    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:20.722789    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:42:20.722904    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:20.736365    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:42:20.736458    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:20.748525    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:42:20.748607    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:20.759598    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:42:20.759674    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:20.770584    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:42:20.770656    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:20.780998    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:42:20.781090    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:20.791758    9824 logs.go:282] 0 containers: []
	W1205 11:42:20.791769    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:20.791832    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:20.801855    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:42:20.801868    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:20.801876    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:42:20.812674    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:20.812768    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:20.844751    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:20.844759    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:20.880075    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:42:20.880090    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:42:20.892219    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:42:20.892229    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:42:20.903782    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:42:20.903793    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:42:20.914874    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:20.914883    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:20.919749    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:42:20.919757    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:42:20.935133    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:42:20.935143    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:42:20.950187    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:42:20.950204    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:42:20.961975    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:42:20.961988    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:20.974186    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:42:20.974201    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:42:20.988590    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:42:20.988600    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:42:21.000475    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:42:21.000489    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:42:21.017404    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:42:21.017414    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:42:21.033776    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:21.033790    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:21.058768    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:42:21.058781    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:42:21.072947    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:42:21.072958    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:42:21.111031    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:42:21.111043    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:42:21.122568    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:42:21.122580    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:42:21.133951    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:21.133961    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:42:21.133989    9824 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1205 11:42:21.133994    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:21.133999    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	  Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:21.134002    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:21.134005    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:42:31.137975    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:36.140384    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:36.140640    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:36.157883    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:42:36.157982    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:36.170658    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:42:36.170744    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:36.181709    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:42:36.181791    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:36.192149    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:42:36.192226    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:36.202770    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:42:36.202848    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:36.217106    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:42:36.217182    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:36.227921    9824 logs.go:282] 0 containers: []
	W1205 11:42:36.227933    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:36.227999    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:36.238576    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:42:36.238592    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:42:36.238598    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:42:36.279673    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:42:36.279686    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:42:36.297906    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:42:36.297919    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:42:36.312187    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:42:36.312198    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:42:36.325472    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:42:36.325482    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:42:36.336948    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:42:36.336958    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:42:36.352237    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:36.352250    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:36.356766    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:36.356774    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:36.393323    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:36.393339    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:36.419836    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:42:36.419843    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:42:36.431876    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:42:36.431886    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:36.444127    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:36.444139    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:42:36.454085    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:36.454180    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:36.486482    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:42:36.486490    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:42:36.497649    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:42:36.497664    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:42:36.513649    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:42:36.513660    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:42:36.535089    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:42:36.535100    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:42:36.552000    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:42:36.552009    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:42:36.563192    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:42:36.563204    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:42:36.578064    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:42:36.578074    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:42:36.590499    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:36.590508    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:42:36.590533    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:42:36.590537    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:36.590541    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:36.590544    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:36.590546    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:42:46.594627    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:51.597007    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
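
From here the log settles into a cycle: probe the apiserver's /healthz, fail after the client timeout, re-enumerate containers, and dump component logs. A self-contained sketch of the probe step, with the 5s timeout read off the timestamps and InsecureSkipVerify standing in for proper CA pinning (both are assumptions, not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs one GET against the apiserver's /healthz endpoint.
    // A hung endpoint surfaces as "Client.Timeout exceeded", as in the log above.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed from the ~5s gap between check and "stopped"
            Transport: &http.Transport{
                // Self-signed cert inside the VM; a real client would pin the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", body)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
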
	I1205 11:42:51.597448    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:51.627155    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:42:51.627281    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:51.644733    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:42:51.644825    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:51.658367    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:42:51.658458    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:51.670024    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:42:51.670094    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:51.680734    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:42:51.680801    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:51.691678    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:42:51.691746    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:51.701899    9824 logs.go:282] 0 containers: []
	W1205 11:42:51.701910    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:51.701970    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:51.712470    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
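
Container discovery is one docker ps per component, keyed on the k8s_ name prefix that kubelet-managed containers carry under the Docker runtime; two IDs per component is consistent with a pre-upgrade and a post-upgrade container both still being present. A sketch of that step (containerIDs is an illustrative helper, not a minikube function):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name matches
    // the k8s_<component> prefix, mirroring the docker ps calls in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
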
	I1205 11:42:51.712485    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:42:51.712490    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:42:51.725192    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:42:51.725205    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:42:51.736669    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:51.736679    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:51.761148    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:51.761156    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:51.765891    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:51.765896    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:51.803052    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:42:51.803063    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:42:51.840309    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:42:51.840323    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:42:51.854576    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:42:51.854591    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:42:51.871778    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:42:51.871788    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:42:51.883054    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:42:51.883064    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:42:51.898515    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:42:51.898527    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:42:51.915054    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:42:51.915069    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:42:51.934578    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:42:51.934592    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:42:51.946117    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:42:51.946127    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:42:51.960595    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:51.960605    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:42:51.972208    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:51.972304    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:52.004966    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:42:52.004971    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:42:52.020419    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:42:52.020434    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:42:52.032254    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:42:52.032266    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:42:52.043764    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:42:52.043778    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
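
The "container status" step runs a shell fallback rather than a fixed binary: if `which crictl` finds nothing, the command substitution degrades to the bare name crictl, the sudo call fails, and the trailing || falls through to docker ps -a. Reproducing the idiom from Go means going through bash -c, since the backticks and || are shell features (a sketch, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when installed; otherwise the failed crictl invocation
        // triggers the docker fallback.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
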
	I1205 11:42:52.055438    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:52.055452    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:42:52.055478    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:42:52.055483    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:42:52.055486    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:42:52.055490    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:42:52.055492    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
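
Stepping back, each ~15-second block is one iteration of a wait loop: sleep roughly 10s, probe /healthz, and on failure gather every component's logs before trying again. A compressed sketch of that apparent control flow (the interval and deadline are read off the timestamps; gatherLogs stands in for the docker logs calls above):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer polls until check succeeds or the deadline passes,
    // dumping diagnostics after every failed probe, as the log above does.
    func waitForAPIServer(check func() error, gatherLogs func(), deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            time.Sleep(10 * time.Second) // ~10s gap between "stopped" and the next check
            if err := check(); err == nil {
                return nil
            }
            gatherLogs()
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        err := waitForAPIServer(
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("gathering logs ...") },
            30*time.Second,
        )
        fmt.Println(err)
    }
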
	I1205 11:43:02.059579    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:07.061958    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:07.062119    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:07.074363    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:43:07.074456    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:07.085262    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:43:07.085344    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:07.095691    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:43:07.095767    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:07.106527    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:43:07.106613    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:07.117113    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:43:07.117183    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:07.127890    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:43:07.127963    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:07.138122    9824 logs.go:282] 0 containers: []
	W1205 11:43:07.138134    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:07.138206    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:07.148963    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:43:07.148981    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:43:07.148986    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:43:07.160344    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:07.160356    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:07.196049    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:43:07.196061    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:43:07.214742    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:43:07.214752    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:43:07.252047    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:43:07.252057    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:43:07.266557    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:43:07.266570    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:43:07.278605    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:43:07.278616    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:43:07.289749    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:07.289759    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:07.313372    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:07.313379    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:07.318075    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:43:07.318081    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:43:07.336083    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:43:07.336096    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:43:07.351745    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:43:07.351756    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:43:07.363417    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:43:07.363428    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:43:07.374828    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:43:07.374840    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:43:07.389028    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:43:07.389040    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:43:07.400368    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:43:07.400378    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:43:07.415539    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:43:07.415550    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:43:07.428366    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:43:07.428375    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:07.440950    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:07.440965    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:43:07.451903    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:07.451997    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:07.484366    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:07.484374    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:43:07.484396    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:43:07.484400    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:07.484403    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:07.484406    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:07.484409    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:43:17.488530    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:22.490896    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:22.491125    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:22.512550    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:43:22.512659    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:22.526837    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:43:22.526921    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:22.544629    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:43:22.544719    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:22.555001    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:43:22.555079    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:22.569802    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:43:22.569877    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:22.580473    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:43:22.580544    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:22.590894    9824 logs.go:282] 0 containers: []
	W1205 11:43:22.590907    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:22.590978    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:22.601192    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:43:22.601216    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:43:22.601222    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:43:22.615758    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:43:22.615766    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:43:22.627449    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:43:22.627461    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:43:22.638948    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:43:22.638959    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:43:22.654220    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:43:22.654229    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:43:22.665744    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:22.665755    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:43:22.675361    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:22.675461    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:22.707249    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:22.707256    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:22.711695    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:43:22.711702    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:43:22.749823    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:43:22.749834    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:43:22.764067    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:43:22.764077    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:43:22.776066    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:43:22.776077    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:43:22.794623    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:22.794634    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:22.817867    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:22.817879    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:22.854141    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:43:22.854151    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:43:22.866114    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:43:22.866125    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:43:22.886631    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:43:22.886642    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:22.900617    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:43:22.900630    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:43:22.915725    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:43:22.915736    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:43:22.928241    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:43:22.928252    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:43:22.946146    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:22.946159    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:43:22.946191    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:43:22.946195    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:22.946200    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:22.946208    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:22.946210    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:43:32.948841    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:37.951174    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:37.951308    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:37.964756    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:43:37.964845    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:37.976664    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:43:37.976753    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:37.987842    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:43:37.987919    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:37.998068    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:43:37.998146    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:38.009014    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:43:38.009094    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:38.020172    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:43:38.020248    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:38.030553    9824 logs.go:282] 0 containers: []
	W1205 11:43:38.030566    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:38.030637    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:38.040862    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:43:38.040881    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:38.040887    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:38.065692    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:38.065699    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:38.102676    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:43:38.102687    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:43:38.114141    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:43:38.114152    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:43:38.132715    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:43:38.132728    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:43:38.150568    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:43:38.150579    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:43:38.166725    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:43:38.166736    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:43:38.185314    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:43:38.185324    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:43:38.196797    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:38.196808    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:38.202063    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:43:38.202070    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:43:38.217432    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:43:38.217442    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:43:38.233077    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:43:38.233088    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:43:38.252813    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:43:38.252824    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:43:38.269326    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:38.269336    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:43:38.278131    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:38.278229    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:38.311049    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:43:38.311058    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:43:38.328008    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:43:38.328017    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:43:38.374586    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:43:38.374597    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:43:38.388788    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:43:38.388799    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:43:38.401121    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:43:38.401132    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:38.413210    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:38.413220    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:43:38.413247    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:43:38.413253    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:38.413257    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:38.413260    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:38.413263    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:43:48.417373    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:53.419806    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:53.420319    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:53.457097    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:43:53.457252    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:53.477635    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:43:53.477741    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:53.492599    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:43:53.492681    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:53.504897    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:43:53.504980    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:53.523924    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:43:53.524000    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:53.534846    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:43:53.534917    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:53.545045    9824 logs.go:282] 0 containers: []
	W1205 11:43:53.545057    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:53.545118    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:53.555214    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:43:53.555227    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:43:53.555232    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:43:53.569253    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:43:53.569264    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:43:53.580501    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:43:53.580512    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:43:53.595742    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:43:53.595756    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:43:53.612798    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:43:53.612809    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:53.625342    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:53.625353    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:43:53.633994    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:53.634091    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:53.665576    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:53.665582    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:53.670514    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:43:53.670524    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:43:53.691181    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:43:53.691191    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:43:53.702974    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:43:53.702989    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:43:53.714777    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:43:53.714788    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:43:53.726593    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:43:53.726604    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:43:53.738324    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:53.738335    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:53.762313    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:53.762320    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:53.798504    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:43:53.798515    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:43:53.842313    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:43:53.842324    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:43:53.856441    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:43:53.856452    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:43:53.871904    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:43:53.871915    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:43:53.883026    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:43:53.883038    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:43:53.898930    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:53.898941    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:43:53.898966    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:43:53.898973    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:43:53.898978    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:43:53.899026    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:43:53.899068    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:44:03.903219    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:08.905203    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:08.905634    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:44:08.939226    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:44:08.939386    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:44:08.960172    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:44:08.960285    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:44:08.974608    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:44:08.974703    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:44:08.989947    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:44:08.990017    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:44:09.001232    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:44:09.001318    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:44:09.016360    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:44:09.016442    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:44:09.027047    9824 logs.go:282] 0 containers: []
	W1205 11:44:09.027064    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:44:09.027135    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:44:09.038401    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:44:09.038419    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:44:09.038424    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:44:09.043379    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:44:09.043388    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:44:09.082167    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:44:09.082180    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:44:09.094510    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:44:09.094523    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:44:09.110940    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:44:09.110951    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:44:09.121662    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:44:09.121760    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:44:09.153851    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:44:09.153861    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:44:09.165434    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:44:09.165446    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:44:09.177433    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:44:09.177447    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:44:09.198102    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:44:09.198113    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:44:09.215945    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:44:09.215958    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:44:09.240219    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:44:09.240228    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:44:09.254524    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:44:09.254537    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:44:09.266788    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:44:09.266803    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:44:09.278814    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:44:09.278828    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:44:09.292956    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:44:09.292971    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:44:09.309045    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:44:09.309056    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:44:09.325397    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:44:09.325410    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:44:09.337205    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:44:09.337214    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:44:09.349628    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:44:09.349642    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:44:09.384957    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:44:09.384971    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:44:09.385002    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:44:09.385008    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:44:09.385013    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:44:09.385017    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:44:09.385037    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:44:19.389124    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:24.391414    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:24.391668    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:44:24.410409    9824 logs.go:282] 2 containers: [76179c9d7fd3 34c7cd6edc31]
	I1205 11:44:24.410512    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:44:24.424757    9824 logs.go:282] 2 containers: [2c527ab4f84a be4390827229]
	I1205 11:44:24.424848    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:44:24.436905    9824 logs.go:282] 2 containers: [dcd06e9c8cf4 02ddf96cec5c]
	I1205 11:44:24.436987    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:44:24.454849    9824 logs.go:282] 2 containers: [99fcbbe051b7 8388ea218d97]
	I1205 11:44:24.454947    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:44:24.465295    9824 logs.go:282] 2 containers: [0baa4edbb64d 2ff36a3bbfe5]
	I1205 11:44:24.465378    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:44:24.475971    9824 logs.go:282] 2 containers: [f9f76c76edfe 2e48bb217d7a]
	I1205 11:44:24.476050    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:44:24.485963    9824 logs.go:282] 0 containers: []
	W1205 11:44:24.485978    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:44:24.486046    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:44:24.496514    9824 logs.go:282] 2 containers: [30733564d2ad ede713ea0239]
	I1205 11:44:24.496536    9824 logs.go:123] Gathering logs for storage-provisioner [ede713ea0239] ...
	I1205 11:44:24.496541    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede713ea0239"
	I1205 11:44:24.507861    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:44:24.507875    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:44:24.517623    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:44:24.517718    9824 logs.go:138] Found kubelet problem: Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
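logs.go:138 flags these lines by matching the journalctl dump against known failure patterns. The denial itself comes from the Kubernetes node authorizer: a kubelet may only read a ConfigMap that some pod bound to its node references, and immediately after the restart no such binding exists yet, hence "no relationship found between node ... and this object". A toy version of the scan (the pattern list is an assumption, not minikube's real one):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // findKubeletProblems scans journalctl output for lines that look like
    // kubelet failures, the way logs.go reports "Found kubelet problem".
    // The substring list below is illustrative only.
    func findKubeletProblems(journal string) []string {
    	patterns := []string{"Failed to watch", "failed to list", "forbidden"}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range patterns {
    			if strings.Contains(line, p) {
    				problems = append(problems, line)
    				break
    			}
    		}
    	}
    	return problems
    }

    func main() {
    	journal := `Dec 05 19:40:02 ... failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden`
    	for _, p := range findKubeletProblems(journal) {
    		fmt.Println("Found kubelet problem:", p)
    	}
    }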
	I1205 11:44:24.549295    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:44:24.549303    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:44:24.585868    9824 logs.go:123] Gathering logs for kube-apiserver [76179c9d7fd3] ...
	I1205 11:44:24.585878    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76179c9d7fd3"
	I1205 11:44:24.600002    9824 logs.go:123] Gathering logs for etcd [2c527ab4f84a] ...
	I1205 11:44:24.600013    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c527ab4f84a"
	I1205 11:44:24.621333    9824 logs.go:123] Gathering logs for kube-controller-manager [f9f76c76edfe] ...
	I1205 11:44:24.621345    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9f76c76edfe"
	I1205 11:44:24.638669    9824 logs.go:123] Gathering logs for storage-provisioner [30733564d2ad] ...
	I1205 11:44:24.638680    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30733564d2ad"
	I1205 11:44:24.650196    9824 logs.go:123] Gathering logs for kube-apiserver [34c7cd6edc31] ...
	I1205 11:44:24.650209    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34c7cd6edc31"
	I1205 11:44:24.688719    9824 logs.go:123] Gathering logs for etcd [be4390827229] ...
	I1205 11:44:24.688729    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4390827229"
	I1205 11:44:24.707417    9824 logs.go:123] Gathering logs for kube-scheduler [8388ea218d97] ...
	I1205 11:44:24.707427    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8388ea218d97"
	I1205 11:44:24.722900    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:44:24.722910    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:44:24.735876    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:44:24.735888    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:44:24.740098    9824 logs.go:123] Gathering logs for coredns [dcd06e9c8cf4] ...
	I1205 11:44:24.740104    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcd06e9c8cf4"
	I1205 11:44:24.751908    9824 logs.go:123] Gathering logs for kube-scheduler [99fcbbe051b7] ...
	I1205 11:44:24.751920    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fcbbe051b7"
	I1205 11:44:24.763585    9824 logs.go:123] Gathering logs for kube-controller-manager [2e48bb217d7a] ...
	I1205 11:44:24.763595    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e48bb217d7a"
	I1205 11:44:24.779429    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:44:24.779439    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:44:24.801728    9824 logs.go:123] Gathering logs for coredns [02ddf96cec5c] ...
	I1205 11:44:24.801735    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02ddf96cec5c"
	I1205 11:44:24.812959    9824 logs.go:123] Gathering logs for kube-proxy [0baa4edbb64d] ...
	I1205 11:44:24.812969    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0baa4edbb64d"
	I1205 11:44:24.824675    9824 logs.go:123] Gathering logs for kube-proxy [2ff36a3bbfe5] ...
	I1205 11:44:24.824685    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff36a3bbfe5"
	I1205 11:44:24.836878    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:44:24.836887    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:44:24.836915    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:44:24.836920    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: W1205 19:40:02.022627    1928 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:44:24.836924    9824 out.go:270]   Dec 05 19:40:02 running-upgrade-842000 kubelet[1928]: E1205 19:40:02.022648    1928 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:44:24.836929    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:44:24.836943    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:44:34.839367    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:39.841550    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:39.841681    9824 kubeadm.go:597] duration metric: took 4m8.669344s to restartPrimaryControlPlane
	W1205 11:44:39.841752    9824 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:44:39.841789    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:44:40.940943    9824 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.099149334s)
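kubeadm.go:597 above marks the turning point: after 4m08s the restarted control plane never answered /healthz, so minikube falls back to wiping it with `kubeadm reset --force` and re-running `kubeadm init`. The decision reduced to a sketch (function names are hypothetical; the real logic lives in minikube's kubeadm bootstrapper):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // Stands in for the 4m08s of failed healthz probes above.
    func restartPrimaryControlPlane() error {
    	return errors.New("apiserver never became healthy")
    }

    func resetCluster() error { fmt.Println("kubeadm reset --force"); return nil }
    func initCluster() error  { fmt.Println("kubeadm init --config kubeadm.yaml"); return nil }

    func main() {
    	start := time.Now()
    	if err := restartPrimaryControlPlane(); err != nil {
    		fmt.Printf("restart failed after %s, will reset cluster: %v\n", time.Since(start), err)
    		if err := resetCluster(); err == nil {
    			initCluster()
    		}
    	}
    }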
	I1205 11:44:40.941014    9824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:44:40.946051    9824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:44:40.948933    9824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:44:40.951588    9824 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:44:40.951595    9824 kubeadm.go:157] found existing configuration files:
	
	I1205 11:44:40.951626    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/admin.conf
	I1205 11:44:40.954826    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:44:40.954863    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:44:40.958089    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/kubelet.conf
	I1205 11:44:40.960755    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:44:40.960792    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:44:40.963369    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/controller-manager.conf
	I1205 11:44:40.966502    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:44:40.966537    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:44:40.969617    9824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/scheduler.conf
	I1205 11:44:40.972250    9824 kubeadm.go:163] "https://control-plane.minikube.internal:56581" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56581 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:44:40.972285    9824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
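The kubeadm.go:163 lines above repeat one pattern four times: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and remove the file when the grep fails (exit status 2 here just means the file does not exist after the reset). Sketched in Go, with paths and the endpoint taken from the log (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleConfigs mirrors the grep-then-rm pattern in the log: any
    // kubeconfig that does not mention the expected endpoint is deleted so
    // kubeadm init can rewrite it.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is missing or the file is absent.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:56581")
    }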
	I1205 11:44:40.975080    9824 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:44:40.993304    9824 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:44:40.993340    9824 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:44:41.040785    9824 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:44:41.040836    9824 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:44:41.040890    9824 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 11:44:41.090988    9824 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:44:41.095155    9824 out.go:235]   - Generating certificates and keys ...
	I1205 11:44:41.095195    9824 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:44:41.095233    9824 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:44:41.095283    9824 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:44:41.095316    9824 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:44:41.095352    9824 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:44:41.095382    9824 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:44:41.095417    9824 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:44:41.095457    9824 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:44:41.095493    9824 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:44:41.095527    9824 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:44:41.095545    9824 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:44:41.095577    9824 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:44:41.302735    9824 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:44:41.394160    9824 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:44:41.473503    9824 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:44:41.611648    9824 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:44:41.643111    9824 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:44:41.643484    9824 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:44:41.643597    9824 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:44:41.728935    9824 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:44:41.733076    9824 out.go:235]   - Booting up control plane ...
	I1205 11:44:41.733124    9824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:44:41.733174    9824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:44:41.733226    9824 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:44:41.733264    9824 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:44:41.733350    9824 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 11:44:46.234700    9824 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502738 seconds
	I1205 11:44:46.234811    9824 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:44:46.238674    9824 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:44:46.751606    9824 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:44:46.751938    9824 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-842000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:44:47.257273    9824 kubeadm.go:310] [bootstrap-token] Using token: ji6a6o.a9vxr738qez5wudf
	I1205 11:44:47.260072    9824 out.go:235]   - Configuring RBAC rules ...
	I1205 11:44:47.260135    9824 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:44:47.260184    9824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:44:47.267621    9824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:44:47.268517    9824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 11:44:47.269392    9824 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:44:47.270338    9824 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:44:47.273821    9824 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:44:47.447820    9824 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:44:47.661727    9824 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:44:47.662168    9824 kubeadm.go:310] 
	I1205 11:44:47.662200    9824 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:44:47.662207    9824 kubeadm.go:310] 
	I1205 11:44:47.662251    9824 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:44:47.662259    9824 kubeadm.go:310] 
	I1205 11:44:47.662272    9824 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:44:47.662311    9824 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:44:47.662339    9824 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:44:47.662343    9824 kubeadm.go:310] 
	I1205 11:44:47.662373    9824 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:44:47.662380    9824 kubeadm.go:310] 
	I1205 11:44:47.662403    9824 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:44:47.662407    9824 kubeadm.go:310] 
	I1205 11:44:47.662436    9824 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:44:47.662478    9824 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:44:47.662525    9824 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:44:47.662528    9824 kubeadm.go:310] 
	I1205 11:44:47.662580    9824 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:44:47.662621    9824 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:44:47.662625    9824 kubeadm.go:310] 
	I1205 11:44:47.662679    9824 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ji6a6o.a9vxr738qez5wudf \
	I1205 11:44:47.662734    9824 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4d1b162eb3608111c477a7c870488ffbf3cfc36b3f1c56af279a8c3b5e43f1b \
	I1205 11:44:47.662747    9824 kubeadm.go:310] 	--control-plane 
	I1205 11:44:47.662764    9824 kubeadm.go:310] 
	I1205 11:44:47.662808    9824 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:44:47.662815    9824 kubeadm.go:310] 
	I1205 11:44:47.662866    9824 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ji6a6o.a9vxr738qez5wudf \
	I1205 11:44:47.662924    9824 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4d1b162eb3608111c477a7c870488ffbf3cfc36b3f1c56af279a8c3b5e43f1b 
	I1205 11:44:47.662973    9824 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
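The join commands above embed a CA pin. kubeadm defines --discovery-token-ca-cert-hash as sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, so the sha256:f4d1... value can be recomputed independently; a self-contained check (the ca.crt path assumes the certificateDir "/var/lib/minikube/certs" named earlier in the init output):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Recomputes the --discovery-token-ca-cert-hash pin: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }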
	I1205 11:44:47.663082    9824 cni.go:84] Creating CNI manager for ""
	I1205 11:44:47.663090    9824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:44:47.667478    9824 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:44:47.674416    9824 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:44:47.677708    9824 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
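The two ssh_runner lines above show minikube pushing a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact payload never appears in the log; the snippet below writes an illustrative bridge conflist of the same general shape (every field value, including the subnet, is an assumption, not the file minikube actually ships):

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI conflist; minikube's real 1-k8s.conflist
    // may differ in names, fields, and CIDR.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }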
	I1205 11:44:47.683521    9824 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:44:47.683631    9824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-842000 minikube.k8s.io/updated_at=2024_12_05T11_44_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=running-upgrade-842000 minikube.k8s.io/primary=true
	I1205 11:44:47.683633    9824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:44:47.716005    9824 kubeadm.go:1113] duration metric: took 32.454416ms to wait for elevateKubeSystemPrivileges
	I1205 11:44:47.723594    9824 ops.go:34] apiserver oom_adj: -16
	I1205 11:44:47.723606    9824 kubeadm.go:394] duration metric: took 4m16.56524725s to StartCluster
	I1205 11:44:47.723617    9824 settings.go:142] acquiring lock: {Name:mk929d066faf20e4c3c6b7a024ba4d845a405894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:44:47.723725    9824 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:44:47.724126    9824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/kubeconfig: {Name:mk997d47fa87fe6dec2166788b387274f153b2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:44:47.724313    9824 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:44:47.724372    9824 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:44:47.724415    9824 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-842000"
	I1205 11:44:47.724423    9824 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-842000"
	W1205 11:44:47.724426    9824 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:44:47.724435    9824 host.go:66] Checking if "running-upgrade-842000" exists ...
	I1205 11:44:47.724450    9824 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-842000"
	I1205 11:44:47.724475    9824 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-842000"
	I1205 11:44:47.724664    9824 config.go:182] Loaded profile config "running-upgrade-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:44:47.725683    9824 kapi.go:59] client config for running-upgrade-842000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.key", CAFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102e5b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
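The kapi.go:59 dump above is a client-go rest.Config built on the host: the VM's IP as Host plus mutual TLS from the profile's client cert/key and the minikube CA. Rebuilt as a minimal sketch (same paths as the log; listing StorageClasses is exactly the call that fails later when default-storageclass is enabled):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Mirrors the config dumped above: host-side client, mTLS from the profile.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/running-upgrade-842000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// On this run this List would fail with "dial tcp 10.0.2.15:8443: i/o timeout".
    	_, err = clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	fmt.Println(err)
    }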
	I1205 11:44:47.725813    9824 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-842000"
	W1205 11:44:47.725820    9824 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:44:47.725827    9824 host.go:66] Checking if "running-upgrade-842000" exists ...
	I1205 11:44:47.728219    9824 out.go:177] * Verifying Kubernetes components...
	I1205 11:44:47.728622    9824 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:44:47.732481    9824 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:44:47.732488    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	I1205 11:44:47.738369    9824 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:44:47.744465    9824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:44:47.750415    9824 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:44:47.750422    9824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:44:47.750430    9824 sshutil.go:53] new ssh client: &{IP:localhost Port:56489 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/running-upgrade-842000/id_rsa Username:docker}
	I1205 11:44:47.837183    9824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:44:47.841651    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:44:47.844079    9824 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:44:47.844132    9824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:44:47.864146    9824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:44:48.183544    9824 api_server.go:72] duration metric: took 459.217917ms to wait for apiserver process to appear ...
	I1205 11:44:48.183558    9824 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:44:48.183569    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:48.183724    9824 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:44:48.183734    9824 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:44:53.185623    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:53.185674    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:58.185859    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:58.185888    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:03.186639    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:03.186665    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:08.187094    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:08.187117    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:13.187743    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:13.187784    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1205 11:45:18.185566    9824 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:45:18.188586    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:18.188598    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:18.189800    9824 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:45:18.197719    9824 addons.go:510] duration metric: took 30.473627375s for enable addons: enabled=[storage-provisioner]
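The asymmetry above is visible in the transport: storage-provisioner was applied from inside the VM (ssh_runner plus the in-VM kubectl at 11:44:47), so it completes, while default-storageclass goes through the host-side client config from kapi.go and must reach https://10.0.2.15:8443 directly, which times out under the qemu2 user-mode network. A sketch of the node-local apply step (command taken verbatim from the log; the wrapper function is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon re-runs the ssh_runner step from the log: apply a manifest
    // already present on the node with the in-VM kubectl and kubeconfig.
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }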
	I1205 11:45:23.189584    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:23.189618    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:28.190968    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:28.191014    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:33.192776    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:33.192814    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:38.194934    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:38.194953    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:43.197112    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:43.197165    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:48.199461    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:48.199569    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:45:48.219485    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:45:48.219575    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:45:48.229621    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:45:48.229709    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:45:48.240076    9824 logs.go:282] 2 containers: [b8e3f20adc7f 63e6e33719d5]
	I1205 11:45:48.240155    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:45:48.250784    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:45:48.250851    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:45:48.262044    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:45:48.262121    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:45:48.272883    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:45:48.272954    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:45:48.283161    9824 logs.go:282] 0 containers: []
	W1205 11:45:48.283172    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:45:48.283234    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:45:48.293963    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:45:48.293978    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:45:48.293983    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:45:48.330314    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:45:48.330325    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:45:48.348295    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:45:48.348309    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:45:48.362852    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:45:48.362862    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:45:48.374719    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:45:48.374730    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:45:48.385789    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:45:48.385802    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:45:48.408626    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:45:48.408636    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:45:48.440987    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:45:48.441082    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:45:48.442519    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:45:48.442523    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:45:48.446999    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:45:48.447005    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:45:48.458404    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:45:48.458418    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:45:48.477163    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:45:48.477175    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:45:48.489478    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:45:48.489490    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:45:48.510016    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:45:48.510028    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:45:48.521933    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:48.521943    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:45:48.521973    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:45:48.521977    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:45:48.521980    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:45:48.521984    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:48.521986    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:45:58.524393    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:03.526747    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:03.526969    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:03.553536    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:46:03.553662    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:03.569102    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:46:03.569199    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:03.582205    9824 logs.go:282] 2 containers: [b8e3f20adc7f 63e6e33719d5]
	I1205 11:46:03.582289    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:03.593680    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:46:03.593757    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:03.604150    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:46:03.604225    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:03.614396    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:46:03.614467    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:03.624856    9824 logs.go:282] 0 containers: []
	W1205 11:46:03.624870    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:03.624935    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:03.635371    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:46:03.635389    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:03.635395    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:03.640315    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:03.640323    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:03.680153    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:46:03.680166    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:46:03.698386    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:46:03.698398    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:46:03.715469    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:46:03.715479    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:46:03.732799    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:03.732810    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:03.756441    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:03.756449    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:03.788124    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:03.788217    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:03.789687    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:46:03.789694    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:46:03.806019    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:46:03.806029    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:46:03.817962    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:46:03.817972    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:46:03.830154    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:46:03.830165    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:46:03.845424    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:46:03.845435    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:03.856720    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:46:03.856730    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:46:03.871321    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:03.871330    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:03.871356    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:46:03.871361    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:03.871365    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:03.871369    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:03.871372    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:13.874436    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:18.876774    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:18.876943    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:18.889269    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:46:18.889356    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:18.900086    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:46:18.900168    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:18.910915    9824 logs.go:282] 2 containers: [b8e3f20adc7f 63e6e33719d5]
	I1205 11:46:18.910988    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:18.921085    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:46:18.921151    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:18.931488    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:46:18.931563    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:18.942229    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:46:18.942305    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:18.954188    9824 logs.go:282] 0 containers: []
	W1205 11:46:18.954200    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:18.954258    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:18.964810    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:46:18.964825    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:46:18.964830    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:46:18.976443    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:18.976452    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:19.001185    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:19.001193    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:19.033093    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:19.033186    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:19.034656    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:19.034661    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:19.038988    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:19.038995    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:19.074434    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:46:19.074445    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:46:19.088982    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:46:19.088993    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:46:19.100512    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:46:19.100525    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:46:19.127611    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:46:19.127622    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:46:19.147141    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:46:19.147151    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:46:19.158951    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:46:19.158964    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:46:19.171231    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:46:19.171244    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:46:19.186417    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:46:19.186428    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:19.199283    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:19.199294    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:19.199320    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:46:19.199324    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:19.199328    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:19.199331    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:19.199334    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:29.203074    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:34.205312    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
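
The two lines above show the health-check cadence that repeats for the rest of the trace: each GET to /healthz gives up after the ~5 s client timeout ("Checking" at :29, "stopped" at :34), and the loop re-probes roughly every 10 s. A minimal sketch of such a poll loop, assuming the guest's self-signed apiserver certificate is skipped for the probe (illustrative only, not minikube's actual api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Client.Timeout matches the ~5 s gap between the "Checking" and
    	// "stopped" lines in the trace above.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The guest apiserver presents a self-signed cert; skip
    			// verification for this illustrative health probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://10.0.2.15:8443/healthz"
    	for i := 0; i < 10; i++ {
    		resp, err := client.Get(url)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if err != nil {
    			fmt.Printf("stopped: %v\n", err)
    		} else {
    			resp.Body.Close()
    		}
    		time.Sleep(10 * time.Second) // re-check cadence seen in the log
    	}
    	fmt.Println("apiserver never became healthy")
    }
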
	I1205 11:46:34.205481    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:34.218974    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:46:34.219064    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:34.233789    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:46:34.233866    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:34.244605    9824 logs.go:282] 2 containers: [b8e3f20adc7f 63e6e33719d5]
	I1205 11:46:34.244671    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:34.255519    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:46:34.255581    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:34.266440    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:46:34.266523    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:34.277415    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:46:34.277492    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:34.288052    9824 logs.go:282] 0 containers: []
	W1205 11:46:34.288068    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:34.288133    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:34.298338    9824 logs.go:282] 1 containers: [950bf029735f]
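
Every cycle rediscovers the control-plane container IDs by filtering on the kubeadm-style container names, as in the docker ps invocations above. A sketch of that lookup, assuming a local Docker daemon (containerIDs is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists the IDs of containers whose names match the given
    // kubeadm-style prefix, e.g. "k8s_kube-apiserver", as in the trace.
    func containerIDs(namePrefix string) ([]string, error) {
    	cmd := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+namePrefix,
    		"--format", "{{.ID}}")
    	b, err := cmd.Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(b)), nil
    }

    func main() {
    	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
    		ids, err := containerIDs(name)
    		if err != nil {
    			fmt.Printf("%s: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
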
	I1205 11:46:34.298353    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:46:34.298359    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:46:34.312081    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:46:34.312092    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:46:34.323983    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:46:34.323995    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:46:34.336110    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:46:34.336120    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:46:34.347895    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:46:34.347904    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:46:34.365758    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:46:34.365771    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:46:34.377129    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:34.377140    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
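
The describe-nodes gather invokes the version-pinned kubectl that minikube stores inside the guest, pointed at the guest kubeconfig. A minimal sketch of that invocation, assuming the paths from the trace exist locally (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Paths copied from the trace: the bundled v1.24.1 kubectl and the
    	// guest-side kubeconfig.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	b, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Println(string(b))
    }
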
	I1205 11:46:34.412325    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:46:34.412335    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:46:34.426898    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:46:34.426910    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:34.439594    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:46:34.439604    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:46:34.455289    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:34.455299    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:34.481537    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:34.481548    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:34.515245    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:34.515338    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
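
Both flagged entries are the same symptom: the kubelet's reflector is denied listing the coredns ConfigMap because the node authorizer finds no relationship between node 'running-upgrade-842000' and that object. A rough sketch of the problem scan itself, assuming simple substring markers rather than minikube's real logs.go matching:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Pull the last 400 kubelet journal lines, as the trace does.
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo journalctl -u kubelet -n 400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	// Flag lines that look like reflector list/watch failures; the two
    	// problems found above both contain "forbidden".
    	sc := bufio.NewScanner(strings.NewReader(string(out)))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.Contains(line, "forbidden") ||
    			strings.Contains(line, "Failed to watch") {
    			fmt.Println("Found kubelet problem:", line)
    		}
    	}
    }
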
	I1205 11:46:34.516807    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:34.516812    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:34.521341    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:34.521349    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:34.521373    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:46:34.521379    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:34.521383    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:34.521388    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:34.521391    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:44.525212    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:49.527564    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:49.527749    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:49.558007    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:46:49.558110    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:49.570969    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:46:49.571052    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:49.581757    9824 logs.go:282] 2 containers: [b8e3f20adc7f 63e6e33719d5]
	I1205 11:46:49.581835    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:49.592630    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:46:49.592703    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:49.603215    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:46:49.603293    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:49.613831    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:46:49.613906    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:49.623901    9824 logs.go:282] 0 containers: []
	W1205 11:46:49.623918    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:49.623987    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:49.634954    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:46:49.634969    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:46:49.634975    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:46:49.651547    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:49.651564    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:49.684097    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:49.684189    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:49.685659    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:46:49.685663    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:46:49.699577    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:46:49.699589    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:46:49.713325    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:46:49.713336    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:46:49.725265    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:46:49.725277    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:46:49.740473    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:49.740484    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:49.765998    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:46:49.766005    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:49.777158    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:49.777167    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:49.781726    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:49.781735    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:49.817032    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:46:49.817043    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:46:49.829098    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:46:49.829111    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:46:49.841258    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:46:49.841272    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:46:49.863782    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:49.863795    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:49.863823    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:46:49.863828    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:46:49.863844    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:46:49.863848    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:49.863855    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:59.866886    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:04.867254    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:04.867507    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:04.893092    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:47:04.893231    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:04.910056    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:47:04.910149    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:04.923057    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:47:04.923138    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:04.934156    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:47:04.934231    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:04.944884    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:47:04.944962    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:04.955745    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:47:04.955819    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:04.968694    9824 logs.go:282] 0 containers: []
	W1205 11:47:04.968706    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:04.968777    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:04.979286    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:47:04.979304    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:47:04.979309    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:47:04.994070    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:47:04.994085    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:47:05.005325    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:47:05.005339    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:47:05.020921    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:47:05.020930    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:47:05.035394    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:47:05.035403    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:47:05.053850    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:47:05.053860    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:05.065457    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:05.065468    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:05.100469    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:47:05.100481    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:47:05.113131    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:05.113141    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:05.137092    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:05.137100    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:05.170498    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:05.170591    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:05.172060    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:05.172067    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:05.176368    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:47:05.176376    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:47:05.188426    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:47:05.188437    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:47:05.200329    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:47:05.200340    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:47:05.212543    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:47:05.212554    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:47:05.225048    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:05.225059    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:05.225085    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:47:05.225089    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:05.225092    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:05.225121    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:05.225139    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:15.228001    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:20.230162    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:20.230252    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:20.241490    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:47:20.241561    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:20.252590    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:47:20.252670    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:20.263746    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:47:20.263828    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:20.274115    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:47:20.274189    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:20.285128    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:47:20.285201    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:20.300320    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:47:20.300394    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:20.310570    9824 logs.go:282] 0 containers: []
	W1205 11:47:20.310583    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:20.310652    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:20.321160    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:47:20.321181    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:47:20.321188    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:47:20.337481    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:47:20.337495    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:47:20.352530    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:47:20.352541    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:47:20.374301    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:47:20.374311    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:20.385987    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:47:20.385997    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:47:20.399615    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:47:20.399625    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:47:20.411250    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:47:20.411259    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:47:20.423141    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:47:20.423152    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:47:20.435101    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:20.435111    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:20.468529    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:20.468628    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:20.470098    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:47:20.470105    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:47:20.481888    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:20.481899    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:20.507554    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:20.507561    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:20.512418    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:20.512426    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:20.549375    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:47:20.549387    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:47:20.564185    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:47:20.564196    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:47:20.576696    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:20.576708    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:20.576738    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:47:20.576743    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:20.576746    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:20.576749    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:20.576751    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:30.580799    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:35.582560    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:35.582778    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:35.601435    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:47:35.601552    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:35.614865    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:47:35.614934    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:35.626711    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:47:35.626792    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:35.641174    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:47:35.641259    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:35.651980    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:47:35.652057    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:35.662949    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:47:35.663024    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:35.675399    9824 logs.go:282] 0 containers: []
	W1205 11:47:35.675409    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:35.675478    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:35.685595    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:47:35.685614    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:47:35.685620    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:47:35.697533    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:47:35.697544    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:47:35.709557    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:47:35.709567    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:47:35.721176    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:35.721942    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:35.746287    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:47:35.746297    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:47:35.762169    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:35.762181    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:35.807998    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:47:35.808009    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:47:35.823045    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:47:35.823055    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:47:35.834892    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:47:35.834903    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:47:35.849908    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:47:35.849918    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:47:35.861986    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:35.861997    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:35.894174    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:35.894266    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:35.895653    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:35.895658    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:35.900275    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:47:35.900281    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:47:35.916750    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:47:35.916760    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:47:35.935147    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:47:35.935156    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:35.947818    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:35.947828    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:35.947856    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:47:35.947861    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:35.947864    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:35.947867    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:35.947871    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:45.951939    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:50.954201    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:50.954311    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:50.969012    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:47:50.969094    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:50.986172    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:47:50.986250    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:50.996461    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:47:50.996541    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:51.007328    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:47:51.007404    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:51.018086    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:47:51.018159    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:51.028536    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:47:51.028607    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:51.039296    9824 logs.go:282] 0 containers: []
	W1205 11:47:51.039308    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:51.039367    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:51.050104    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:47:51.050131    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:47:51.050137    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:47:51.067102    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:47:51.067112    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:47:51.081014    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:47:51.081024    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:47:51.092675    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:47:51.092689    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:47:51.104577    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:51.104590    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:51.138540    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:51.138635    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:51.140107    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:51.140118    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:51.145108    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:47:51.145117    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:47:51.163044    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:47:51.163057    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:47:51.174482    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:47:51.174493    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:47:51.186260    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:47:51.186271    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:51.198726    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:51.198739    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:51.234219    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:47:51.234232    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:47:51.253279    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:47:51.253289    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:47:51.266509    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:47:51.266520    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:47:51.281496    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:51.281507    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:51.305099    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:51.305109    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:51.305160    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:47:51.305173    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:47:51.305178    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:47:51.305215    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:51.305218    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:01.309250    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:06.311503    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:06.311618    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:48:06.322388    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:48:06.322479    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:48:06.333999    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:48:06.334090    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:48:06.344911    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:48:06.344995    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:48:06.355442    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:48:06.355515    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:48:06.366198    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:48:06.366275    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:48:06.376958    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:48:06.377027    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:48:06.387531    9824 logs.go:282] 0 containers: []
	W1205 11:48:06.387541    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:48:06.387610    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:48:06.398037    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:48:06.398056    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:48:06.398061    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:48:06.411923    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:48:06.411933    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:48:06.424018    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:48:06.424027    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:48:06.436115    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:48:06.436130    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:48:06.451411    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:48:06.451422    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:48:06.463280    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:48:06.463294    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:48:06.475817    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:48:06.475831    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:48:06.512128    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:48:06.512137    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:48:06.531176    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:48:06.531186    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:48:06.547546    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:48:06.547558    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:48:06.572416    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:48:06.572426    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:48:06.577244    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:48:06.577251    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:48:06.589173    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:48:06.589184    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:48:06.623089    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:06.623181    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:06.624613    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:48:06.624618    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:48:06.636314    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:48:06.636325    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:48:06.653905    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:06.653916    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:48:06.653946    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:48:06.653951    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:06.653955    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:06.653959    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:06.653962    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:16.657092    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:21.659434    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:21.659651    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:48:21.680845    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:48:21.680956    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:48:21.695240    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:48:21.695307    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:48:21.708386    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:48:21.708464    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:48:21.718501    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:48:21.718578    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:48:21.728853    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:48:21.728926    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:48:21.739589    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:48:21.739660    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:48:21.749759    9824 logs.go:282] 0 containers: []
	W1205 11:48:21.749777    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:48:21.749843    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:48:21.760474    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:48:21.760490    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:48:21.760495    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:48:21.771859    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:48:21.771869    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:48:21.783583    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:48:21.783594    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:48:21.795662    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:48:21.795673    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:48:21.807177    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:48:21.807190    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:48:21.824519    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:48:21.824530    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:48:21.836418    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:48:21.836431    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:48:21.841333    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:48:21.841342    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:48:21.855427    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:48:21.855436    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:48:21.869200    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:48:21.869212    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:48:21.884410    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:48:21.884423    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:48:21.895868    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:48:21.895878    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:48:21.923957    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:48:21.923967    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:48:21.956460    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:21.956552    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:21.958016    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:48:21.958025    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:48:21.997131    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:48:21.997142    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:48:22.009237    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:22.009247    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:48:22.009272    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:48:22.009279    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:22.009291    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:22.009294    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:22.009343    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:32.013222    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:37.015744    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:37.015978    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:48:37.035511    9824 logs.go:282] 1 containers: [8f56a1b66d02]
	I1205 11:48:37.035616    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:48:37.050796    9824 logs.go:282] 1 containers: [c62afbee00dd]
	I1205 11:48:37.050868    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:48:37.062543    9824 logs.go:282] 4 containers: [96252861cb2b daa0e911eff7 b8e3f20adc7f 63e6e33719d5]
	I1205 11:48:37.062619    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:48:37.073180    9824 logs.go:282] 1 containers: [fa00c18980ef]
	I1205 11:48:37.073252    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:48:37.084908    9824 logs.go:282] 1 containers: [1b5d0c2d3f9c]
	I1205 11:48:37.084981    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:48:37.096003    9824 logs.go:282] 1 containers: [ccf16edecae4]
	I1205 11:48:37.096075    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:48:37.106220    9824 logs.go:282] 0 containers: []
	W1205 11:48:37.106237    9824 logs.go:284] No container was found matching "kindnet"
	I1205 11:48:37.106301    9824 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:48:37.116858    9824 logs.go:282] 1 containers: [950bf029735f]
	I1205 11:48:37.116874    9824 logs.go:123] Gathering logs for dmesg ...
	I1205 11:48:37.116880    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:48:37.121951    9824 logs.go:123] Gathering logs for coredns [96252861cb2b] ...
	I1205 11:48:37.121961    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96252861cb2b"
	I1205 11:48:37.133387    9824 logs.go:123] Gathering logs for coredns [63e6e33719d5] ...
	I1205 11:48:37.133397    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e6e33719d5"
	I1205 11:48:37.146693    9824 logs.go:123] Gathering logs for etcd [c62afbee00dd] ...
	I1205 11:48:37.146703    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c62afbee00dd"
	I1205 11:48:37.160790    9824 logs.go:123] Gathering logs for Docker ...
	I1205 11:48:37.160800    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:48:37.185319    9824 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:48:37.185328    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:48:37.219492    9824 logs.go:123] Gathering logs for kube-apiserver [8f56a1b66d02] ...
	I1205 11:48:37.219502    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f56a1b66d02"
	I1205 11:48:37.233966    9824 logs.go:123] Gathering logs for coredns [daa0e911eff7] ...
	I1205 11:48:37.233977    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daa0e911eff7"
	I1205 11:48:37.246115    9824 logs.go:123] Gathering logs for coredns [b8e3f20adc7f] ...
	I1205 11:48:37.246125    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e3f20adc7f"
	I1205 11:48:37.262348    9824 logs.go:123] Gathering logs for kube-proxy [1b5d0c2d3f9c] ...
	I1205 11:48:37.262360    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b5d0c2d3f9c"
	I1205 11:48:37.273688    9824 logs.go:123] Gathering logs for kubelet ...
	I1205 11:48:37.273701    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:48:37.307992    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:37.308092    9824 logs.go:138] Found kubelet problem: Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:37.309544    9824 logs.go:123] Gathering logs for kube-scheduler [fa00c18980ef] ...
	I1205 11:48:37.309550    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00c18980ef"
	I1205 11:48:37.325500    9824 logs.go:123] Gathering logs for kube-controller-manager [ccf16edecae4] ...
	I1205 11:48:37.325512    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccf16edecae4"
	I1205 11:48:37.343935    9824 logs.go:123] Gathering logs for storage-provisioner [950bf029735f] ...
	I1205 11:48:37.343945    9824 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 950bf029735f"
	I1205 11:48:37.355051    9824 logs.go:123] Gathering logs for container status ...
	I1205 11:48:37.355064    9824 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:48:37.367022    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:37.367032    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:48:37.367058    9824 out.go:270] X Problems detected in kubelet:
	W1205 11:48:37.367062    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	W1205 11:48:37.367066    9824 out.go:270]   Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	I1205 11:48:37.367069    9824 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:37.367072    9824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:47.371156    9824 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:52.373482    9824 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:52.377820    9824 out.go:201] 
	W1205 11:48:52.380928    9824 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1205 11:48:52.380943    9824 out.go:270] * 
	W1205 11:48:52.381496    9824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:48:52.392715    9824 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-842000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
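The exit above is the generic GUEST_START failure: minikube polled https://10.0.2.15:8443/healthz roughly every ten seconds, each attempt timing out after about 5s (the api_server.go:253/269 pairs in the stderr log), until the 6m0s node-start budget ran out. For reproducing that probe by hand, a minimal Go sketch follows; the URL and timeout are taken from the log, while the skipped TLS verification and everything else are illustrative assumptions, not minikube's actual client setup.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" above
			Transport: &http.Transport{
				// assumption: self-signed apiserver cert, so skip verification for this probe only
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err) // the failure mode seen in this run
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	}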
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-05 11:48:52.455964 -0800 PST m=+1292.435043835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-842000 -n running-upgrade-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-842000 -n running-upgrade-842000: exit status 2 (15.639099792s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-842000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-907000 sudo cat                            | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo cat                            | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo cat                            | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo cat                            | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo                                | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo find                           | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-907000 sudo crio                           | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-907000                                     | cilium-907000             | jenkins | v1.34.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:38 PST |
	| start   | -p kubernetes-upgrade-703000                         | kubernetes-upgrade-703000 | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-856000                             | offline-docker-856000     | jenkins | v1.34.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:38 PST |
	| start   | -p stopped-upgrade-050000                            | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:39 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-703000                         | kubernetes-upgrade-703000 | jenkins | v1.34.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:38 PST |
	| start   | -p kubernetes-upgrade-703000                         | kubernetes-upgrade-703000 | jenkins | v1.34.0 | 05 Dec 24 11:38 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-703000                         | kubernetes-upgrade-703000 | jenkins | v1.34.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:38 PST |
	| start   | -p running-upgrade-842000                            | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:38 PST | 05 Dec 24 11:39 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-050000 stop                          | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:39 PST | 05 Dec 24 11:39 PST |
	| start   | -p stopped-upgrade-050000                            | stopped-upgrade-050000    | jenkins | v1.34.0 | 05 Dec 24 11:39 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-842000                            | running-upgrade-842000    | jenkins | v1.34.0 | 05 Dec 24 11:39 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-050000                            | stopped-upgrade-050000    | jenkins | v1.34.0 | 05 Dec 24 11:48 PST | 05 Dec 24 11:49 PST |
	| start   | -p pause-676000 --memory=2048                        | pause-676000              | jenkins | v1.34.0 | 05 Dec 24 11:49 PST |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 11:49:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 11:49:00.993009   10018 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:49:00.993153   10018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:00.993155   10018 out.go:358] Setting ErrFile to fd 2...
	I1205 11:49:00.993156   10018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:49:00.993262   10018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:49:00.994494   10018 out.go:352] Setting JSON to false
	I1205 11:49:01.013618   10018 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6510,"bootTime":1733421631,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:49:01.013695   10018 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:49:01.020475   10018 out.go:177] * [pause-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:49:01.027487   10018 notify.go:220] Checking for updates...
	I1205 11:49:01.033565   10018 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:49:01.042495   10018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:49:01.046461   10018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:49:01.050449   10018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:49:01.051871   10018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:49:01.054476   10018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:49:01.057795   10018 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:49:01.057864   10018 config.go:182] Loaded profile config "running-upgrade-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:49:01.057910   10018 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:49:01.062377   10018 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:49:01.069450   10018 start.go:297] selected driver: qemu2
	I1205 11:49:01.069453   10018 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:49:01.069459   10018 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:49:01.072036   10018 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:49:01.075661   10018 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:49:01.078617   10018 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:49:01.078630   10018 cni.go:84] Creating CNI manager for ""
	I1205 11:49:01.078648   10018 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:49:01.078650   10018 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:49:01.078673   10018 start.go:340] cluster config:
	{Name:pause-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:49:01.083417   10018 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:49:01.089383   10018 out.go:177] * Starting "pause-676000" primary control-plane node in "pause-676000" cluster
	I1205 11:49:01.093424   10018 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:49:01.093435   10018 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:49:01.093443   10018 cache.go:56] Caching tarball of preloaded images
	I1205 11:49:01.093501   10018 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:49:01.093504   10018 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:49:01.093574   10018 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/pause-676000/config.json ...
	I1205 11:49:01.093583   10018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/pause-676000/config.json: {Name:mkf105483d65f5f5f66569c0bd78ce96f29b4d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:49:01.093809   10018 start.go:360] acquireMachinesLock for pause-676000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:49:01.093848   10018 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "pause-676000"
	I1205 11:49:01.093857   10018 start.go:93] Provisioning new machine with config: &{Name:pause-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:49:01.093889   10018 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:49:01.102439   10018 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:49:01.124136   10018 start.go:159] libmachine.API.Create for "pause-676000" (driver="qemu2")
	I1205 11:49:01.124163   10018 client.go:168] LocalClient.Create starting
	I1205 11:49:01.124240   10018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:49:01.124277   10018 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:01.124290   10018 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:01.124328   10018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:49:01.124356   10018 main.go:141] libmachine: Decoding PEM data...
	I1205 11:49:01.124363   10018 main.go:141] libmachine: Parsing certificate...
	I1205 11:49:01.124712   10018 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:49:01.326103   10018 main.go:141] libmachine: Creating SSH key...
	I1205 11:49:01.434562   10018 main.go:141] libmachine: Creating Disk image...
	I1205 11:49:01.434570   10018 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:49:01.435811   10018 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2
	I1205 11:49:01.456276   10018 main.go:141] libmachine: STDOUT: 
	I1205 11:49:01.456301   10018 main.go:141] libmachine: STDERR: 
	I1205 11:49:01.456360   10018 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2 +20000M
	I1205 11:49:01.465081   10018 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:49:01.465104   10018 main.go:141] libmachine: STDERR: 
	I1205 11:49:01.465117   10018 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2
	I1205 11:49:01.465121   10018 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:49:01.465133   10018 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:49:01.465157   10018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:8b:1d:21:72:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/pause-676000/disk.qcow2
	I1205 11:49:01.469019   10018 main.go:141] libmachine: STDOUT: 
	I1205 11:49:01.469032   10018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:49:01.469054   10018 client.go:171] duration metric: took 344.888209ms to LocalClient.Create
	I1205 11:49:03.471247   10018 start.go:128] duration metric: took 2.377354125s to createHost
	I1205 11:49:03.471282   10018 start.go:83] releasing machines lock for "pause-676000", held for 2.37744925s
	W1205 11:49:03.471341   10018 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:03.481733   10018 out.go:177] * Deleting "pause-676000" in qemu2 ...
	W1205 11:49:03.511815   10018 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:49:03.511871   10018 start.go:729] Will try again in 5 seconds ...
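	Every qemu2 start in this window fails the same way: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot connect to /var/run/socket_vmnet, so no VM ever boots. A hedged one-off check that the socket_vmnet daemon is actually listening (the socket path comes from the log above; the check itself is an illustration, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// same unix socket the qemu2 driver hands to socket_vmnet_client above
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // "Connection refused" in this run
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}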
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-12-05 19:39:23 UTC, ends at Thu 2024-12-05 19:49:08 UTC. --
	Dec 05 19:48:48 running-upgrade-842000 dockerd[4292]: time="2024-12-05T19:48:48.967480494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 05 19:48:48 running-upgrade-842000 dockerd[4292]: time="2024-12-05T19:48:48.967486619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 05 19:48:48 running-upgrade-842000 dockerd[4292]: time="2024-12-05T19:48:48.967533368Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/80f5eb33cafd530066c00d5065a4051862d2e74dcf0502346028cb42c2d39abf pid=17378 runtime=io.containerd.runc.v2
	Dec 05 19:48:49 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:49Z" level=error msg="ContainerStats resp: {0x40000b3d80 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x4000928cc0 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x4000929300 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x400039bac0 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x4000929d80 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x40004feac0 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x4000a3e300 linux}"
	Dec 05 19:48:50 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:50Z" level=error msg="ContainerStats resp: {0x4000a3e700 linux}"
	Dec 05 19:48:51 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:48:56 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:48:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:49:00 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:00Z" level=error msg="ContainerStats resp: {0x400065d680 linux}"
	Dec 05 19:49:00 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:00Z" level=error msg="ContainerStats resp: {0x400039a180 linux}"
	Dec 05 19:49:01 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:49:01 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:01Z" level=error msg="ContainerStats resp: {0x400073ac00 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x400092a940 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x400092ae00 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x400092b140 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x4000928540 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x400092b780 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x400092bd80 linux}"
	Dec 05 19:49:02 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:02Z" level=error msg="ContainerStats resp: {0x4000929340 linux}"
	Dec 05 19:49:06 running-upgrade-842000 cri-dockerd[4012]: time="2024-12-05T19:49:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	80f5eb33cafd5       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   519c622ba00bd
	3cc7feb10b65e       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   c434b9c1a0d21
	96252861cb2bb       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   519c622ba00bd
	daa0e911eff77       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c434b9c1a0d21
	1b5d0c2d3f9c0       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1264f2922d1c4
	950bf029735f0       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   c99f3c078ee33
	c62afbee00dd3       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   742b2f34a480a
	fa00c18980ef3       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6613d3ee17563
	8f56a1b66d023       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9c1722a9369c4
	ccf16edecae41       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ec82691ea63d4
	
	
	==> coredns [3cc7feb10b65] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5913489793340124642.2187471522504444020. HINFO: read udp 10.244.0.2:46005->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5913489793340124642.2187471522504444020. HINFO: read udp 10.244.0.2:56563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5913489793340124642.2187471522504444020. HINFO: read udp 10.244.0.2:45322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5913489793340124642.2187471522504444020. HINFO: read udp 10.244.0.2:59381->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5913489793340124642.2187471522504444020. HINFO: read udp 10.244.0.2:54725->10.0.2.3:53: i/o timeout
	
	
	==> coredns [80f5eb33cafd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3885046229895998518.6680079576759975076. HINFO: read udp 10.244.0.3:46364->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3885046229895998518.6680079576759975076. HINFO: read udp 10.244.0.3:44823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3885046229895998518.6680079576759975076. HINFO: read udp 10.244.0.3:58170->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3885046229895998518.6680079576759975076. HINFO: read udp 10.244.0.3:53514->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3885046229895998518.6680079576759975076. HINFO: read udp 10.244.0.3:51664->10.0.2.3:53: i/o timeout
	
	
	==> coredns [96252861cb2b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9175118565981230481.8522176273341947556. HINFO: read udp 10.244.0.3:53404->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9175118565981230481.8522176273341947556. HINFO: read udp 10.244.0.3:36333->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9175118565981230481.8522176273341947556. HINFO: read udp 10.244.0.3:42175->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9175118565981230481.8522176273341947556. HINFO: read udp 10.244.0.3:36884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9175118565981230481.8522176273341947556. HINFO: read udp 10.244.0.3:54346->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [daa0e911eff7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:56695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:51647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:35312->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:42706->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:48991->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:57113->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:48871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:38005->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:59173->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2578087807472827112.3227508699367787297. HINFO: read udp 10.244.0.2:46743->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
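	All four coredns containers log the same pattern: HINFO probes forwarded to 10.0.2.3:53, QEMU's built-in SLIRP DNS address, time out. A small Go sketch that reproduces the probe from inside the guest is below; the forwarder address comes from the logs, while the lookup name and timeouts are arbitrary illustrative choices.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// route every lookup through the in-guest forwarder coredns is using
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			fmt.Println("lookup failed (the i/o timeout seen above):", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}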
	
	
	==> describe nodes <==
	Name:               running-upgrade-842000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-842000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=running-upgrade-842000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T11_44_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:44:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-842000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:49:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:44:47 +0000   Thu, 05 Dec 2024 19:44:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:44:47 +0000   Thu, 05 Dec 2024 19:44:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:44:47 +0000   Thu, 05 Dec 2024 19:44:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:44:47 +0000   Thu, 05 Dec 2024 19:44:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-842000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 2968bb0ae5c0456f8769b9672627a0a0
	  System UUID:                2968bb0ae5c0456f8769b9672627a0a0
	  Boot ID:                    48557b4a-cb48-444d-9cd4-955748339fc6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-t684g                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-vknmv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-842000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-842000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-842000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-8n478                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-842000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-842000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-842000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-842000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-842000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-842000 event: Registered Node running-upgrade-842000 in Controller
	
	
	==> dmesg <==
	[  +0.077151] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.083961] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.137954] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091018] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.082334] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.286909] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +8.639144] systemd-fstab-generator[1922]: Ignoring "noauto" for root device
	[Dec 5 19:40] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.536226] systemd-fstab-generator[2490]: Ignoring "noauto" for root device
	[  +0.205981] systemd-fstab-generator[2573]: Ignoring "noauto" for root device
	[  +0.128610] systemd-fstab-generator[2668]: Ignoring "noauto" for root device
	[  +0.132458] systemd-fstab-generator[2693]: Ignoring "noauto" for root device
	[  +7.492877] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.078703] systemd-fstab-generator[3968]: Ignoring "noauto" for root device
	[  +0.080848] systemd-fstab-generator[3980]: Ignoring "noauto" for root device
	[  +0.082196] systemd-fstab-generator[3991]: Ignoring "noauto" for root device
	[  +0.099606] systemd-fstab-generator[4005]: Ignoring "noauto" for root device
	[  +2.445832] systemd-fstab-generator[4279]: Ignoring "noauto" for root device
	[  +2.373044] systemd-fstab-generator[4602]: Ignoring "noauto" for root device
	[  +1.445427] systemd-fstab-generator[4746]: Ignoring "noauto" for root device
	[  +1.808140] kauditd_printk_skb: 80 callbacks suppressed
	[ +15.792211] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 5 19:44] systemd-fstab-generator[11795]: Ignoring "noauto" for root device
	[  +5.624740] systemd-fstab-generator[12404]: Ignoring "noauto" for root device
	[  +0.479650] systemd-fstab-generator[12540]: Ignoring "noauto" for root device
	
	
	==> etcd [c62afbee00dd] <==
	{"level":"info","ts":"2024-12-05T19:44:42.871Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T19:44:42.871Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T19:44:42.865Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-05T19:44:42.871Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-05T19:44:42.865Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-12-05T19:44:42.865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-05T19:44:42.871Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-842000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:44:43.838Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T19:44:43.840Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T19:44:43.840Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T19:44:43.840Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-05T19:44:43.841Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T19:44:43.841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T19:44:43.841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:44:43.841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:44:43.841Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:49:08 up 9 min,  0 users,  load average: 0.38, 0.30, 0.15
	Linux running-upgrade-842000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8f56a1b66d02] <==
	I1205 19:44:44.989931       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1205 19:44:44.991691       1 controller.go:611] quota admission added evaluator for: namespaces
	I1205 19:44:45.055598       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:44:45.057515       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:44:45.057627       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 19:44:45.057723       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1205 19:44:45.057865       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1205 19:44:45.801484       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 19:44:45.957422       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 19:44:45.958681       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:44:45.958687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 19:44:46.078381       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:44:46.097090       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 19:44:46.196802       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1205 19:44:46.198861       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1205 19:44:46.199217       1 controller.go:611] quota admission added evaluator for: endpoints
	I1205 19:44:46.200366       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:44:47.088512       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1205 19:44:47.420476       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1205 19:44:47.423864       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1205 19:44:47.432740       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1205 19:44:47.494876       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:45:00.242795       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1205 19:45:00.891828       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:45:02.257225       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ccf16edecae4] <==
	I1205 19:45:00.247663       1 range_allocator.go:173] Starting range CIDR allocator
	I1205 19:45:00.247673       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1205 19:45:00.247682       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1205 19:45:00.247802       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1205 19:45:00.250973       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1205 19:45:00.251563       1 range_allocator.go:374] Set node running-upgrade-842000 PodCIDR to [10.244.0.0/24]
	I1205 19:45:00.263140       1 shared_informer.go:262] Caches are synced for daemon sets
	I1205 19:45:00.267475       1 shared_informer.go:262] Caches are synced for attach detach
	I1205 19:45:00.269677       1 shared_informer.go:262] Caches are synced for ephemeral
	I1205 19:45:00.288949       1 shared_informer.go:262] Caches are synced for persistent volume
	I1205 19:45:00.308031       1 shared_informer.go:262] Caches are synced for HPA
	I1205 19:45:00.337771       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1205 19:45:00.366242       1 shared_informer.go:262] Caches are synced for job
	I1205 19:45:00.419757       1 shared_informer.go:262] Caches are synced for cronjob
	I1205 19:45:00.438769       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1205 19:45:00.459745       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 19:45:00.472833       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 19:45:00.489920       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1205 19:45:00.793027       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-t684g"
	I1205 19:45:00.798737       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vknmv"
	I1205 19:45:00.884952       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 19:45:00.892848       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 19:45:00.892860       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1205 19:45:00.896555       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8n478"
	W1205 19:45:00.942783       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	
	
	==> kube-proxy [1b5d0c2d3f9c] <==
	I1205 19:45:02.243845       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1205 19:45:02.243871       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1205 19:45:02.243882       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1205 19:45:02.255233       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1205 19:45:02.255243       1 server_others.go:206] "Using iptables Proxier"
	I1205 19:45:02.255312       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1205 19:45:02.255460       1 server.go:661] "Version info" version="v1.24.1"
	I1205 19:45:02.255468       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:45:02.255772       1 config.go:317] "Starting service config controller"
	I1205 19:45:02.255782       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1205 19:45:02.255790       1 config.go:226] "Starting endpoint slice config controller"
	I1205 19:45:02.255819       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1205 19:45:02.256091       1 config.go:444] "Starting node config controller"
	I1205 19:45:02.256113       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1205 19:45:02.357435       1 shared_informer.go:262] Caches are synced for service config
	I1205 19:45:02.357451       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1205 19:45:02.357435       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [fa00c18980ef] <==
	W1205 19:44:44.994302       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:44:44.994330       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:44:44.994376       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:44:44.994397       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:44:44.994482       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:44:44.994504       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:44:44.994546       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:44:44.994567       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:44:44.994822       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:44:44.994845       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:44:44.994877       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:44:44.994898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 19:44:44.994951       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:44:44.994977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:44:45.895618       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:44:45.895637       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:44:45.912738       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:44:45.912760       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:44:45.928292       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:44:45.928304       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:44:45.981189       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:44:45.981240       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:44:45.983917       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:44:45.983928       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1205 19:44:47.992151       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-12-05 19:39:23 UTC, ends at Thu 2024-12-05 19:49:08 UTC. --
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.292664   12410 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.393036   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/374f9c5e-327c-45f1-9271-503a4e0facf1-tmp\") pod \"storage-provisioner\" (UID: \"374f9c5e-327c-45f1-9271-503a4e0facf1\") " pod="kube-system/storage-provisioner"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.393064   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbq86\" (UniqueName: \"kubernetes.io/projected/374f9c5e-327c-45f1-9271-503a4e0facf1-kube-api-access-vbq86\") pod \"storage-provisioner\" (UID: \"374f9c5e-327c-45f1-9271-503a4e0facf1\") " pod="kube-system/storage-provisioner"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.497755   12410 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.497778   12410 projected.go:192] Error preparing data for projected volume kube-api-access-vbq86 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.497815   12410 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/374f9c5e-327c-45f1-9271-503a4e0facf1-kube-api-access-vbq86 podName:374f9c5e-327c-45f1-9271-503a4e0facf1 nodeName:}" failed. No retries permitted until 2024-12-05 19:45:00.99780119 +0000 UTC m=+13.587895017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vbq86" (UniqueName: "kubernetes.io/projected/374f9c5e-327c-45f1-9271-503a4e0facf1-kube-api-access-vbq86") pod "storage-provisioner" (UID: "374f9c5e-327c-45f1-9271-503a4e0facf1") : configmap "kube-root-ca.crt" not found
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.794456   12410 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: W1205 19:45:00.797101   12410 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: E1205 19:45:00.797133   12410 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-842000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-842000' and this object
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.801091   12410 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.898315   12410 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.995511   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a0a9e99-c085-42dc-a526-2d2642a76cc4-config-volume\") pod \"coredns-6d4b75cb6d-t684g\" (UID: \"6a0a9e99-c085-42dc-a526-2d2642a76cc4\") " pod="kube-system/coredns-6d4b75cb6d-t684g"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.995600   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da35ae22-ab98-4a94-9bf9-678a32d9275a-config-volume\") pod \"coredns-6d4b75cb6d-vknmv\" (UID: \"da35ae22-ab98-4a94-9bf9-678a32d9275a\") " pod="kube-system/coredns-6d4b75cb6d-vknmv"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.995646   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s87j\" (UniqueName: \"kubernetes.io/projected/da35ae22-ab98-4a94-9bf9-678a32d9275a-kube-api-access-6s87j\") pod \"coredns-6d4b75cb6d-vknmv\" (UID: \"da35ae22-ab98-4a94-9bf9-678a32d9275a\") " pod="kube-system/coredns-6d4b75cb6d-vknmv"
	Dec 05 19:45:00 running-upgrade-842000 kubelet[12410]: I1205 19:45:00.995676   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25bfx\" (UniqueName: \"kubernetes.io/projected/6a0a9e99-c085-42dc-a526-2d2642a76cc4-kube-api-access-25bfx\") pod \"coredns-6d4b75cb6d-t684g\" (UID: \"6a0a9e99-c085-42dc-a526-2d2642a76cc4\") " pod="kube-system/coredns-6d4b75cb6d-t684g"
	Dec 05 19:45:01 running-upgrade-842000 kubelet[12410]: I1205 19:45:01.096377   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432-xtables-lock\") pod \"kube-proxy-8n478\" (UID: \"15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432\") " pod="kube-system/kube-proxy-8n478"
	Dec 05 19:45:01 running-upgrade-842000 kubelet[12410]: I1205 19:45:01.096405   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6tjg\" (UniqueName: \"kubernetes.io/projected/15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432-kube-api-access-t6tjg\") pod \"kube-proxy-8n478\" (UID: \"15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432\") " pod="kube-system/kube-proxy-8n478"
	Dec 05 19:45:01 running-upgrade-842000 kubelet[12410]: I1205 19:45:01.096431   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432-kube-proxy\") pod \"kube-proxy-8n478\" (UID: \"15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432\") " pod="kube-system/kube-proxy-8n478"
	Dec 05 19:45:01 running-upgrade-842000 kubelet[12410]: I1205 19:45:01.096444   12410 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432-lib-modules\") pod \"kube-proxy-8n478\" (UID: \"15d2d6f9-78b3-41c3-aa9e-f67ce3e8e432\") " pod="kube-system/kube-proxy-8n478"
	Dec 05 19:45:02 running-upgrade-842000 kubelet[12410]: E1205 19:45:02.096624   12410 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 05 19:45:02 running-upgrade-842000 kubelet[12410]: E1205 19:45:02.096867   12410 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/da35ae22-ab98-4a94-9bf9-678a32d9275a-config-volume podName:da35ae22-ab98-4a94-9bf9-678a32d9275a nodeName:}" failed. No retries permitted until 2024-12-05 19:45:02.596857027 +0000 UTC m=+15.186950853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/da35ae22-ab98-4a94-9bf9-678a32d9275a-config-volume") pod "coredns-6d4b75cb6d-vknmv" (UID: "da35ae22-ab98-4a94-9bf9-678a32d9275a") : failed to sync configmap cache: timed out waiting for the condition
	Dec 05 19:45:02 running-upgrade-842000 kubelet[12410]: E1205 19:45:02.096638   12410 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 05 19:45:02 running-upgrade-842000 kubelet[12410]: E1205 19:45:02.096952   12410 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/6a0a9e99-c085-42dc-a526-2d2642a76cc4-config-volume podName:6a0a9e99-c085-42dc-a526-2d2642a76cc4 nodeName:}" failed. No retries permitted until 2024-12-05 19:45:02.596944608 +0000 UTC m=+15.187038435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6a0a9e99-c085-42dc-a526-2d2642a76cc4-config-volume") pod "coredns-6d4b75cb6d-t684g" (UID: "6a0a9e99-c085-42dc-a526-2d2642a76cc4") : failed to sync configmap cache: timed out waiting for the condition
	Dec 05 19:48:49 running-upgrade-842000 kubelet[12410]: I1205 19:48:49.780092   12410 scope.go:110] "RemoveContainer" containerID="63e6e33719d5a9f468d13d4f2515cec894dfee317993347fb38f8b7d692b9e23"
	Dec 05 19:48:49 running-upgrade-842000 kubelet[12410]: I1205 19:48:49.797075   12410 scope.go:110] "RemoveContainer" containerID="b8e3f20adc7f9bde3aa32e4fe663ead6f20c4698d0e378fa49a6888b6c8934ee"
	
	
	==> storage-provisioner [950bf029735f] <==
	I1205 19:45:01.357594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:45:01.362439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:45:01.362453       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:45:01.365927       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:45:01.365959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e5b0a6e-3bcd-4207-99b3-71cfca7b1745", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-842000_8fc7741c-641d-4e36-8d2f-efd52e97e0ff became leader
	I1205 19:45:01.366090       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-842000_8fc7741c-641d-4e36-8d2f-efd52e97e0ff!
	I1205 19:45:01.466342       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-842000_8fc7741c-641d-4e36-8d2f-efd52e97e0ff!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-842000 -n running-upgrade-842000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-842000 -n running-upgrade-842000: exit status 2 (15.622547375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-842000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-842000
--- FAIL: TestRunningBinaryUpgrade (640.19s)

TestKubernetesUpgrade (18.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.959893125s)

-- stdout --
	* [kubernetes-upgrade-703000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-703000" primary control-plane node in "kubernetes-upgrade-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:38:25.924305    9718 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:38:25.924452    9718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:25.924456    9718 out.go:358] Setting ErrFile to fd 2...
	I1205 11:38:25.924458    9718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:25.924608    9718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:38:25.925727    9718 out.go:352] Setting JSON to false
	I1205 11:38:25.943415    9718 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5874,"bootTime":1733421631,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:38:25.943491    9718 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:38:25.948903    9718 out.go:177] * [kubernetes-upgrade-703000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:38:25.960171    9718 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:38:25.960216    9718 notify.go:220] Checking for updates...
	I1205 11:38:25.967937    9718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:38:25.970882    9718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:38:25.973982    9718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:38:25.976973    9718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:38:25.979927    9718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:38:25.983343    9718 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:38:25.983423    9718 config.go:182] Loaded profile config "offline-docker-856000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:38:25.983476    9718 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:38:25.987913    9718 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:38:25.994909    9718 start.go:297] selected driver: qemu2
	I1205 11:38:25.994916    9718 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:38:25.994922    9718 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:38:25.997471    9718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:38:26.000914    9718 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:38:26.004058    9718 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:38:26.004083    9718 cni.go:84] Creating CNI manager for ""
	I1205 11:38:26.004107    9718 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:38:26.004140    9718 start.go:340] cluster config:
	{Name:kubernetes-upgrade-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:38:26.009216    9718 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:38:26.015936    9718 out.go:177] * Starting "kubernetes-upgrade-703000" primary control-plane node in "kubernetes-upgrade-703000" cluster
	I1205 11:38:26.019902    9718 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:38:26.019929    9718 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:38:26.019944    9718 cache.go:56] Caching tarball of preloaded images
	I1205 11:38:26.020039    9718 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:38:26.020045    9718 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:38:26.020122    9718 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kubernetes-upgrade-703000/config.json ...
	I1205 11:38:26.020135    9718 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kubernetes-upgrade-703000/config.json: {Name:mk5bc9c0f989202f1fa007b4f27101eed11dc5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:38:26.020409    9718 start.go:360] acquireMachinesLock for kubernetes-upgrade-703000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:26.020466    9718 start.go:364] duration metric: took 46.208µs to acquireMachinesLock for "kubernetes-upgrade-703000"
	I1205 11:38:26.020479    9718 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:38:26.020504    9718 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:38:26.023991    9718 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:38:26.042712    9718 start.go:159] libmachine.API.Create for "kubernetes-upgrade-703000" (driver="qemu2")
	I1205 11:38:26.042738    9718 client.go:168] LocalClient.Create starting
	I1205 11:38:26.042817    9718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:38:26.042858    9718 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:26.042870    9718 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:26.042909    9718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:38:26.042941    9718 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:26.042955    9718 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:26.043367    9718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:38:26.204439    9718 main.go:141] libmachine: Creating SSH key...
	I1205 11:38:26.394489    9718 main.go:141] libmachine: Creating Disk image...
	I1205 11:38:26.394496    9718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:38:26.394732    9718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:26.405346    9718 main.go:141] libmachine: STDOUT: 
	I1205 11:38:26.405367    9718 main.go:141] libmachine: STDERR: 
	I1205 11:38:26.405422    9718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2 +20000M
	I1205 11:38:26.414005    9718 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:38:26.414020    9718 main.go:141] libmachine: STDERR: 
	I1205 11:38:26.414042    9718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:26.414050    9718 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:38:26.414064    9718 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:26.414096    9718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d1:63:28:e8:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:26.415903    9718 main.go:141] libmachine: STDOUT: 
	I1205 11:38:26.415914    9718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:26.415932    9718 client.go:171] duration metric: took 373.192333ms to LocalClient.Create
	I1205 11:38:28.418201    9718 start.go:128] duration metric: took 2.397644458s to createHost
	I1205 11:38:28.418266    9718 start.go:83] releasing machines lock for "kubernetes-upgrade-703000", held for 2.397811125s
	W1205 11:38:28.418310    9718 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:28.431536    9718 out.go:177] * Deleting "kubernetes-upgrade-703000" in qemu2 ...
	W1205 11:38:28.459805    9718 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:28.459835    9718 start.go:729] Will try again in 5 seconds ...
	I1205 11:38:33.461863    9718 start.go:360] acquireMachinesLock for kubernetes-upgrade-703000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:33.461929    9718 start.go:364] duration metric: took 47µs to acquireMachinesLock for "kubernetes-upgrade-703000"
	I1205 11:38:33.461943    9718 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:38:33.461998    9718 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:38:33.468767    9718 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:38:33.485703    9718 start.go:159] libmachine.API.Create for "kubernetes-upgrade-703000" (driver="qemu2")
	I1205 11:38:33.485737    9718 client.go:168] LocalClient.Create starting
	I1205 11:38:33.485787    9718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:38:33.485813    9718 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:33.485822    9718 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:33.485856    9718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:38:33.485873    9718 main.go:141] libmachine: Decoding PEM data...
	I1205 11:38:33.485879    9718 main.go:141] libmachine: Parsing certificate...
	I1205 11:38:33.486162    9718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:38:33.659757    9718 main.go:141] libmachine: Creating SSH key...
	I1205 11:38:33.791272    9718 main.go:141] libmachine: Creating Disk image...
	I1205 11:38:33.791280    9718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:38:33.791456    9718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:33.801740    9718 main.go:141] libmachine: STDOUT: 
	I1205 11:38:33.801769    9718 main.go:141] libmachine: STDERR: 
	I1205 11:38:33.801832    9718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2 +20000M
	I1205 11:38:33.810889    9718 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:38:33.810911    9718 main.go:141] libmachine: STDERR: 
	I1205 11:38:33.810925    9718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:33.810931    9718 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:38:33.810940    9718 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:33.810989    9718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cd:de:d3:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:33.812945    9718 main.go:141] libmachine: STDOUT: 
	I1205 11:38:33.812960    9718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:33.812973    9718 client.go:171] duration metric: took 327.234666ms to LocalClient.Create
	I1205 11:38:35.815190    9718 start.go:128] duration metric: took 2.353179375s to createHost
	I1205 11:38:35.815268    9718 start.go:83] releasing machines lock for "kubernetes-upgrade-703000", held for 2.353348625s
	W1205 11:38:35.815691    9718 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:35.824327    9718 out.go:201] 
	W1205 11:38:35.828330    9718 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:38:35.828354    9718 out.go:270] * 
	W1205 11:38:35.831563    9718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:38:35.840287    9718 out.go:201] 

** /stderr **
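One thing the trace shows clearly is that disk provisioning succeeds before anything network-related runs: libmachine converts a raw seed image to qcow2 and then grows it by 20000M. Those two qemu-img steps can be reproduced standalone; the sketch below mirrors the commands in the log (path shortened) and is purely illustrative, not minikube's internals:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		disk := "disk.qcow2" // the log uses the full machine-directory path
		for _, args := range [][]string{
			// Step 1: convert the raw seed image into qcow2 format.
			{"convert", "-f", "raw", "-O", "qcow2", disk + ".raw", disk},
			// Step 2: grow the image by 20000M, matching the log.
			{"resize", disk, "+20000M"},
		} {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v: %s(err=%v)\n", args, out, err)
		}
	}

A resize on a qcow2 image only updates metadata (the file grows lazily), which is why both steps complete in a few milliseconds in the log.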
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
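Every start attempt above dies at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick way to confirm whether the daemon is listening is to dial the unix socket directly; this probe is an illustration (the socket path is taken from the cluster config above; the program is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this host the dial would fail with "connection refused",
			// matching the STDERR lines in the log.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When the dial is refused, the usual fix on a machine where socket_vmnet was installed via Homebrew is to (re)start the daemon (e.g. `sudo brew services start socket_vmnet`) before rerunning the test.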
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-703000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-703000: (3.099219167s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-703000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-703000 status --format={{.Host}}: exit status 7 (69.731625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
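Exit status 7 from `minikube status` is expected here rather than a new failure: the status command encodes the host, cluster, and Kubernetes states as a bitmask (1, 2 and 4 respectively in minikube's cmd/minikube/cmd/status.go, assuming the layout in current sources), so 7 simply means all three are down, which is exactly what `minikube stop` should produce. A minimal decoder under that assumption:

	package main

	import "fmt"

	func main() {
		const (
			hostDown    = 1 << 0 // minikube VM not running
			clusterDown = 1 << 1 // cluster not running
			k8sDown     = 1 << 2 // kubernetes not running
		)
		code := 7 // the exit status reported above
		fmt.Println("host down:", code&hostDown != 0)
		fmt.Println("cluster down:", code&clusterDown != 0)
		fmt.Println("kubernetes down:", code&k8sDown != 0)
	}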
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.219950041s)

-- stdout --
	* [kubernetes-upgrade-703000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-703000" primary control-plane node in "kubernetes-upgrade-703000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:38:39.058688    9768 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:38:39.058841    9768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:39.058845    9768 out.go:358] Setting ErrFile to fd 2...
	I1205 11:38:39.058847    9768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:38:39.058985    9768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:38:39.060132    9768 out.go:352] Setting JSON to false
	I1205 11:38:39.078052    9768 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5888,"bootTime":1733421631,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:38:39.078135    9768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:38:39.083129    9768 out.go:177] * [kubernetes-upgrade-703000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:38:39.091073    9768 notify.go:220] Checking for updates...
	I1205 11:38:39.095075    9768 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:38:39.099000    9768 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:38:39.106072    9768 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:38:39.114022    9768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:38:39.124061    9768 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:38:39.132058    9768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:38:39.136310    9768 config.go:182] Loaded profile config "kubernetes-upgrade-703000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:38:39.136577    9768 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:38:39.141059    9768 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:38:39.146054    9768 start.go:297] selected driver: qemu2
	I1205 11:38:39.146060    9768 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:38:39.146118    9768 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:38:39.148794    9768 cni.go:84] Creating CNI manager for ""
	I1205 11:38:39.148828    9768 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:38:39.148855    9768 start.go:340] cluster config:
	{Name:kubernetes-upgrade-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:38:39.153312    9768 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:38:39.160019    9768 out.go:177] * Starting "kubernetes-upgrade-703000" primary control-plane node in "kubernetes-upgrade-703000" cluster
	I1205 11:38:39.164061    9768 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:38:39.164075    9768 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:38:39.164084    9768 cache.go:56] Caching tarball of preloaded images
	I1205 11:38:39.164158    9768 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:38:39.164164    9768 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:38:39.164221    9768 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kubernetes-upgrade-703000/config.json ...
	I1205 11:38:39.164550    9768 start.go:360] acquireMachinesLock for kubernetes-upgrade-703000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:39.164596    9768 start.go:364] duration metric: took 40.292µs to acquireMachinesLock for "kubernetes-upgrade-703000"
	I1205 11:38:39.164605    9768 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:38:39.164610    9768 fix.go:54] fixHost starting: 
	I1205 11:38:39.164727    9768 fix.go:112] recreateIfNeeded on kubernetes-upgrade-703000: state=Stopped err=<nil>
	W1205 11:38:39.164736    9768 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:38:39.173077    9768 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-703000" ...
	I1205 11:38:39.176050    9768 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:39.176091    9768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cd:de:d3:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:39.178162    9768 main.go:141] libmachine: STDOUT: 
	I1205 11:38:39.178182    9768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:39.178210    9768 fix.go:56] duration metric: took 13.599458ms for fixHost
	I1205 11:38:39.178213    9768 start.go:83] releasing machines lock for "kubernetes-upgrade-703000", held for 13.612916ms
	W1205 11:38:39.178219    9768 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:38:39.178263    9768 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:39.178267    9768 start.go:729] Will try again in 5 seconds ...
	I1205 11:38:44.180127    9768 start.go:360] acquireMachinesLock for kubernetes-upgrade-703000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:38:44.180566    9768 start.go:364] duration metric: took 347.583µs to acquireMachinesLock for "kubernetes-upgrade-703000"
	I1205 11:38:44.180691    9768 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:38:44.180712    9768 fix.go:54] fixHost starting: 
	I1205 11:38:44.181447    9768 fix.go:112] recreateIfNeeded on kubernetes-upgrade-703000: state=Stopped err=<nil>
	W1205 11:38:44.181474    9768 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:38:44.191137    9768 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-703000" ...
	I1205 11:38:44.196051    9768 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:38:44.196239    9768 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9b:cd:de:d3:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubernetes-upgrade-703000/disk.qcow2
	I1205 11:38:44.207851    9768 main.go:141] libmachine: STDOUT: 
	I1205 11:38:44.207909    9768 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:38:44.207989    9768 fix.go:56] duration metric: took 27.279833ms for fixHost
	I1205 11:38:44.208007    9768 start.go:83] releasing machines lock for "kubernetes-upgrade-703000", held for 27.419417ms
	W1205 11:38:44.208300    9768 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:38:44.217102    9768 out.go:201] 
	W1205 11:38:44.222230    9768 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:38:44.222289    9768 out.go:270] * 
	W1205 11:38:44.224780    9768 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:38:44.233120    9768 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-703000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
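The second start goes through the existing-machine path (fixHost) twice: the first driver start fails, start.go logs "Will try again in 5 seconds", sleeps, and retries once before exiting with GUEST_PROVISION. A stripped-down sketch of that retry shape (illustrative only, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start that keeps failing above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}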
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-703000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-703000 version --output=json: exit status 1 (64.625542ms)

** stderr ** 
	error: context "kubernetes-upgrade-703000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
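The kubectl failure is a downstream effect: since no start ever succeeded, minikube never wrote a kubernetes-upgrade-703000 entry into the kubeconfig, so the context lookup fails. One way to list the contexts that do exist is to load the kubeconfig with client-go (an assumption for illustration; the test itself just shells out to kubectl, and this sketch needs the k8s.io/client-go module):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Loads the same kubeconfig kubectl would use (KUBECONFIG or ~/.kube/config).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
	}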
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-05 11:38:44.311942 -0800 PST m=+684.285506460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-703000 -n kubernetes-upgrade-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-703000 -n kubernetes-upgrade-703000: exit status 7 (37.505417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-703000
--- FAIL: TestKubernetesUpgrade (18.54s)
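For reference, the failing test's shape is straightforward: start on the oldest supported Kubernetes, stop, start again in place on the newest, then verify the upgraded cluster with kubectl. A condensed sketch of that sequence, with the binary path, profile name, and versions taken from the log (an illustration, not the real version_upgrade_test.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		mk := "out/minikube-darwin-arm64"
		profile := "kubernetes-upgrade-703000"
		// 1. start on the oldest supported version
		_ = run(mk, "start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.20.0", "--driver=qemu2")
		// 2. stop the cluster
		_ = run(mk, "stop", "-p", profile)
		// 3. upgrade in place to the newest version
		_ = run(mk, "start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.31.2", "--driver=qemu2")
		// 4. the test then asserts kubectl can reach the upgraded cluster
		_ = run("kubectl", "--context", profile, "version", "--output=json")
	}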

TestStoppedBinaryUpgrade/Upgrade (592.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3158904263 start -p stopped-upgrade-050000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3158904263 start -p stopped-upgrade-050000 --memory=2200 --vm-driver=qemu2 : (54.454949125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3158904263 -p stopped-upgrade-050000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3158904263 -p stopped-upgrade-050000 stop: (12.109763417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-050000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-050000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m45.479280083s)

-- stdout --
	* [stopped-upgrade-050000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-050000" primary control-plane node in "stopped-upgrade-050000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-050000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1205 11:39:41.339519    9807 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:39:41.339954    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:39:41.339957    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:39:41.339959    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:39:41.340288    9807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:39:41.341770    9807 out.go:352] Setting JSON to false
	I1205 11:39:41.362367    9807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5950,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:39:41.362456    9807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:39:41.368052    9807 out.go:177] * [stopped-upgrade-050000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:39:41.375178    9807 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:39:41.375312    9807 notify.go:220] Checking for updates...
	I1205 11:39:41.381812    9807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:39:41.384827    9807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:39:41.387834    9807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:39:41.389093    9807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:39:41.391846    9807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:39:41.395152    9807 config.go:182] Loaded profile config "stopped-upgrade-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:39:41.398808    9807 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:39:41.402132    9807 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:39:41.406864    9807 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:39:41.413819    9807 start.go:297] selected driver: qemu2
	I1205 11:39:41.413835    9807 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:39:41.413880    9807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:39:41.417010    9807 cni.go:84] Creating CNI manager for ""
	I1205 11:39:41.417049    9807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:39:41.417208    9807 start.go:340] cluster config:
	{Name:stopped-upgrade-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:39:41.417419    9807 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:39:41.423837    9807 out.go:177] * Starting "stopped-upgrade-050000" primary control-plane node in "stopped-upgrade-050000" cluster
	I1205 11:39:41.427799    9807 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:39:41.427812    9807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1205 11:39:41.427847    9807 cache.go:56] Caching tarball of preloaded images
	I1205 11:39:41.427920    9807 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:39:41.427925    9807 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1205 11:39:41.427979    9807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/config.json ...
	I1205 11:39:41.428437    9807 start.go:360] acquireMachinesLock for stopped-upgrade-050000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:39:41.428471    9807 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "stopped-upgrade-050000"
	I1205 11:39:41.428480    9807 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:39:41.428492    9807 fix.go:54] fixHost starting: 
	I1205 11:39:41.428621    9807 fix.go:112] recreateIfNeeded on stopped-upgrade-050000: state=Stopped err=<nil>
	W1205 11:39:41.428629    9807 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:39:41.431757    9807 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-050000" ...
	I1205 11:39:41.439957    9807 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:39:41.440060    9807 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/qemu.pid -nic user,model=virtio,hostfwd=tcp::56452-:22,hostfwd=tcp::56453-:2376,hostname=stopped-upgrade-050000 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/disk.qcow2
	I1205 11:39:41.484889    9807 main.go:141] libmachine: STDOUT: 
	I1205 11:39:41.484929    9807 main.go:141] libmachine: STDERR: 
	I1205 11:39:41.484942    9807 main.go:141] libmachine: Waiting for VM to start (ssh -p 56452 docker@127.0.0.1)...
	I1205 11:40:01.762838    9807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/config.json ...
	I1205 11:40:01.763093    9807 machine.go:93] provisionDockerMachine start ...
	I1205 11:40:01.763162    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:01.763353    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:01.763360    9807 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 11:40:01.822089    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 11:40:01.822144    9807 buildroot.go:166] provisioning hostname "stopped-upgrade-050000"
	I1205 11:40:01.822240    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:01.822362    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:01.822367    9807 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-050000 && echo "stopped-upgrade-050000" | sudo tee /etc/hostname
	I1205 11:40:01.882268    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-050000
	
	I1205 11:40:01.882336    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:01.882455    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:01.882464    9807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-050000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-050000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-050000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 11:40:01.944288    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 11:40:01.944304    9807 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20053-7409/.minikube CaCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20053-7409/.minikube}
	I1205 11:40:01.944329    9807 buildroot.go:174] setting up certificates
	I1205 11:40:01.944334    9807 provision.go:84] configureAuth start
	I1205 11:40:01.944338    9807 provision.go:143] copyHostCerts
	I1205 11:40:01.944440    9807 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem, removing ...
	I1205 11:40:01.944458    9807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem
	I1205 11:40:01.944577    9807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/cert.pem (1123 bytes)
	I1205 11:40:01.944786    9807 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem, removing ...
	I1205 11:40:01.944789    9807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem
	I1205 11:40:01.944841    9807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/key.pem (1679 bytes)
	I1205 11:40:01.944959    9807 exec_runner.go:144] found /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem, removing ...
	I1205 11:40:01.944962    9807 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem
	I1205 11:40:01.945013    9807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.pem (1078 bytes)
	I1205 11:40:01.945112    9807 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-050000 san=[127.0.0.1 localhost minikube stopped-upgrade-050000]
	I1205 11:40:02.135840    9807 provision.go:177] copyRemoteCerts
	I1205 11:40:02.135929    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 11:40:02.135938    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	I1205 11:40:02.166095    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 11:40:02.173195    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 11:40:02.180633    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 11:40:02.188035    9807 provision.go:87] duration metric: took 243.681ms to configureAuth
	I1205 11:40:02.188043    9807 buildroot.go:189] setting minikube options for container-runtime
	I1205 11:40:02.188149    9807 config.go:182] Loaded profile config "stopped-upgrade-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:40:02.188197    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.188289    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:02.188294    9807 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 11:40:02.246506    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1205 11:40:02.246517    9807 buildroot.go:70] root file system type: tmpfs
	I1205 11:40:02.246575    9807 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 11:40:02.246645    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.246766    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:02.246804    9807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 11:40:02.309910    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 11:40:02.309987    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.310105    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:02.310115    9807 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 11:40:02.674649    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1205 11:40:02.674663    9807 machine.go:96] duration metric: took 911.572833ms to provisionDockerMachine
	I1205 11:40:02.674670    9807 start.go:293] postStartSetup for "stopped-upgrade-050000" (driver="qemu2")
	I1205 11:40:02.674676    9807 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 11:40:02.674743    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 11:40:02.674753    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	I1205 11:40:02.704143    9807 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 11:40:02.705549    9807 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 11:40:02.705560    9807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20053-7409/.minikube/addons for local assets ...
	I1205 11:40:02.705654    9807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20053-7409/.minikube/files for local assets ...
	I1205 11:40:02.705790    9807 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem -> 79222.pem in /etc/ssl/certs
	I1205 11:40:02.705958    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 11:40:02.708940    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem --> /etc/ssl/certs/79222.pem (1708 bytes)
	I1205 11:40:02.715856    9807 start.go:296] duration metric: took 41.181458ms for postStartSetup
	I1205 11:40:02.715868    9807 fix.go:56] duration metric: took 21.287578416s for fixHost
	I1205 11:40:02.715912    9807 main.go:141] libmachine: Using SSH client type: native
	I1205 11:40:02.716020    9807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a32fc0] 0x100a35800 <nil>  [] 0s} localhost 56452 <nil> <nil>}
	I1205 11:40:02.716024    9807 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 11:40:02.773663    9807 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427602.556967754
	
	I1205 11:40:02.773670    9807 fix.go:216] guest clock: 1733427602.556967754
	I1205 11:40:02.773673    9807 fix.go:229] Guest: 2024-12-05 11:40:02.556967754 -0800 PST Remote: 2024-12-05 11:40:02.71587 -0800 PST m=+21.487760584 (delta=-158.902246ms)
	I1205 11:40:02.773685    9807 fix.go:200] guest clock delta is within tolerance: -158.902246ms
	I1205 11:40:02.773688    9807 start.go:83] releasing machines lock for "stopped-upgrade-050000", held for 21.345404916s
	I1205 11:40:02.773760    9807 ssh_runner.go:195] Run: cat /version.json
	I1205 11:40:02.773773    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	I1205 11:40:02.773785    9807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 11:40:02.773819    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	W1205 11:40:02.774403    9807 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56686->127.0.0.1:56452: read: connection reset by peer
	I1205 11:40:02.774423    9807 retry.go:31] will retry after 253.435894ms: ssh: handshake failed: read tcp 127.0.0.1:56686->127.0.0.1:56452: read: connection reset by peer
	W1205 11:40:03.060607    9807 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 11:40:03.060690    9807 ssh_runner.go:195] Run: systemctl --version
	I1205 11:40:03.062909    9807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 11:40:03.064931    9807 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 11:40:03.064995    9807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1205 11:40:03.068587    9807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1205 11:40:03.073893    9807 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 11:40:03.073904    9807 start.go:495] detecting cgroup driver to use...
	I1205 11:40:03.074015    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:40:03.081662    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1205 11:40:03.085280    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 11:40:03.088717    9807 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 11:40:03.088759    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 11:40:03.092364    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:40:03.096190    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 11:40:03.099401    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:40:03.102443    9807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 11:40:03.105499    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 11:40:03.108768    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 11:40:03.112244    9807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 11:40:03.115502    9807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 11:40:03.118365    9807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 11:40:03.121289    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:03.194400    9807 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 11:40:03.202987    9807 start.go:495] detecting cgroup driver to use...
	I1205 11:40:03.203080    9807 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 11:40:03.209180    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:40:03.214822    9807 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 11:40:03.227537    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:40:03.232235    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:40:03.236597    9807 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 11:40:03.288579    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:40:03.293939    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:40:03.299939    9807 ssh_runner.go:195] Run: which cri-dockerd
	I1205 11:40:03.301336    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 11:40:03.304843    9807 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1205 11:40:03.310363    9807 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 11:40:03.389197    9807 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 11:40:03.475661    9807 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 11:40:03.475735    9807 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 11:40:03.482004    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:03.566261    9807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:40:04.699663    9807 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.133393084s)
	I1205 11:40:04.699757    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 11:40:04.704758    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:40:04.710313    9807 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 11:40:04.797865    9807 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 11:40:04.878090    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:04.955305    9807 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 11:40:04.960933    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:40:04.965876    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:05.059326    9807 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 11:40:05.102566    9807 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 11:40:05.102676    9807 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 11:40:05.106783    9807 start.go:563] Will wait 60s for crictl version
	I1205 11:40:05.106850    9807 ssh_runner.go:195] Run: which crictl
	I1205 11:40:05.108183    9807 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 11:40:05.124050    9807 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1205 11:40:05.124132    9807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:40:05.140936    9807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:40:05.161348    9807 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1205 11:40:05.161495    9807 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1205 11:40:05.162791    9807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
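The hosts update above is a filter-and-append idiom: strip any stale host.minikube.internal line, append the fresh mapping, then copy the result over /etc/hosts with sudo. A minimal Go sketch of the same rewrite (the temp path is illustrative):

```go
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			keep = append(keep, line)
		}
	}
	keep = append(keep, "10.0.2.2\thost.minikube.internal")
	// Write a temp copy first, then `sudo cp` it into place, matching the
	// `> /tmp/h.$$; sudo cp` pattern in the log.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```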
	I1205 11:40:05.166689    9807 kubeadm.go:883] updating cluster {Name:stopped-upgrade-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1205 11:40:05.166735    9807 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:40:05.166787    9807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:40:05.177612    9807 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:40:05.177621    9807 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:40:05.177677    9807 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:40:05.181149    9807 ssh_runner.go:195] Run: which lz4
	I1205 11:40:05.182586    9807 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 11:40:05.183821    9807 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 11:40:05.183838    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1205 11:40:06.072505    9807 docker.go:653] duration metric: took 889.978ms to copy over tarball
	I1205 11:40:06.072581    9807 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 11:40:07.248502    9807 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.175918833s)
	I1205 11:40:07.248518    9807 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 11:40:07.265138    9807 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:40:07.268989    9807 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1205 11:40:07.274464    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:07.355872    9807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:40:08.928815    9807 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.572927042s)
	I1205 11:40:08.929184    9807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:40:08.941025    9807 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:40:08.941036    9807 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:40:08.941042    9807 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 11:40:08.946877    9807 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:08.948796    9807 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:40:08.950888    9807 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:08.951004    9807 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:08.953748    9807 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:08.953523    9807 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:40:08.955215    9807 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:08.955211    9807 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:08.956607    9807 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:08.956631    9807 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:08.957914    9807 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:08.958033    9807 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:08.958880    9807 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:08.959611    9807 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:08.960778    9807 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:08.961160    9807 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:09.465567    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 11:40:09.476943    9807 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1205 11:40:09.477509    9807 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1205 11:40:09.477569    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1205 11:40:09.490381    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:40:09.490579    9807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 11:40:09.493086    9807 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1205 11:40:09.493112    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1205 11:40:09.500482    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:09.505249    9807 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 11:40:09.505316    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1205 11:40:09.511891    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:09.521075    9807 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1205 11:40:09.521102    9807 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:40:09.521175    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W1205 11:40:09.534291    9807 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:40:09.534640    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:09.556512    9807 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1205 11:40:09.556656    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1205 11:40:09.556659    9807 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1205 11:40:09.556686    9807 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:09.556736    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:40:09.556743    9807 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1205 11:40:09.556758    9807 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:09.556795    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:40:09.569122    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:40:09.569162    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1205 11:40:09.569261    9807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:40:09.570950    9807 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1205 11:40:09.570965    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1205 11:40:09.626098    9807 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:40:09.626153    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1205 11:40:09.672973    9807 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 11:40:09.707413    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:09.718428    9807 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1205 11:40:09.718456    9807 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:09.718522    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:40:09.728953    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1205 11:40:09.792560    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:09.803812    9807 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1205 11:40:09.803831    9807 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:09.803893    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:40:09.814620    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1205 11:40:09.882608    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:09.893870    9807 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1205 11:40:09.893897    9807 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:09.893962    9807 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1205 11:40:09.904566    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1205 11:40:10.297615    9807 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:40:10.297744    9807 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:10.309186    9807 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 11:40:10.309209    9807 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:10.309273    9807 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:40:10.324681    9807 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:40:10.324839    9807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:40:10.326469    9807 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 11:40:10.326492    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 11:40:10.362107    9807 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:40:10.362123    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 11:40:10.614766    9807 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 11:40:10.614812    9807 cache_images.go:92] duration metric: took 1.67377225s to LoadCachedImages
	W1205 11:40:10.614867    9807 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
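Each cached image above runs through the same pipeline: inspect the runtime for the expected image ID, remove the mismatched tag, scp the cached tarball into the VM, then pipe it into `docker load`. A compressed sketch of that per-image sequence; the `Runner` interface and function name are hypothetical stand-ins for the ssh_runner calls in the log:

```go
package cache

import "strings"

// Runner abstracts the remote command/copy calls seen in the log (hypothetical).
type Runner interface {
	Run(cmd string) (string, error)
	Copy(localPath, remotePath string) error
}

// loadCachedImage mirrors the log's per-image flow: skip if the runtime
// already holds the expected ID, otherwise transfer the tarball and load it.
func loadCachedImage(r Runner, image, wantID, cachePath, vmPath string) error {
	id, _ := r.Run("docker image inspect --format {{.Id}} " + image)
	if strings.TrimSpace(id) == wantID {
		return nil // already present at the right hash
	}
	r.Run("docker rmi " + image) // drop the stale/mismatched tag, if any
	if err := r.Copy(cachePath, vmPath); err != nil {
		return err
	}
	_, err := r.Run(`/bin/bash -c "sudo cat ` + vmPath + ` | docker load"`)
	return err
}
```

Note how the run above fails overall: kube-scheduler's cached tarball is missing on the host, so LoadCachedImages aborts after transferring only pause, coredns, and storage-provisioner.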
	I1205 11:40:10.614875    9807 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1205 11:40:10.614942    9807 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-050000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 11:40:10.615022    9807 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 11:40:10.631771    9807 cni.go:84] Creating CNI manager for ""
	I1205 11:40:10.631787    9807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:40:10.632049    9807 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 11:40:10.632065    9807 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-050000 NodeName:stopped-upgrade-050000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 11:40:10.632137    9807 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-050000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 11:40:10.632200    9807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1205 11:40:10.635231    9807 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 11:40:10.635285    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 11:40:10.637857    9807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 11:40:10.642967    9807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 11:40:10.648148    9807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1205 11:40:10.655081    9807 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1205 11:40:10.656271    9807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 11:40:10.659993    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:40:10.737955    9807 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:40:10.751037    9807 certs.go:68] Setting up /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000 for IP: 10.0.2.15
	I1205 11:40:10.751050    9807 certs.go:194] generating shared ca certs ...
	I1205 11:40:10.751060    9807 certs.go:226] acquiring lock for ca certs: {Name:mk649b36c637f895ef0e3cb84362644c97069221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:10.751517    9807 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.key
	I1205 11:40:10.751693    9807 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.key
	I1205 11:40:10.751714    9807 certs.go:256] generating profile certs ...
	I1205 11:40:10.751956    9807 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/client.key
	I1205 11:40:10.751974    9807 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key.0f9902cb
	I1205 11:40:10.751987    9807 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt.0f9902cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1205 11:40:10.808156    9807 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt.0f9902cb ...
	I1205 11:40:10.808169    9807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt.0f9902cb: {Name:mk82bdac370189c1ae3d9a4052b4c0915554ecbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:10.808522    9807 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key.0f9902cb ...
	I1205 11:40:10.808528    9807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key.0f9902cb: {Name:mkdb819126e13ca367d41b9b8cfe3019b2d58139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:10.808696    9807 certs.go:381] copying /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt.0f9902cb -> /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt
	I1205 11:40:10.808818    9807 certs.go:385] copying /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key.0f9902cb -> /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key
	I1205 11:40:10.809098    9807 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/proxy-client.key
	I1205 11:40:10.809262    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922.pem (1338 bytes)
	W1205 11:40:10.809420    9807 certs.go:480] ignoring /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922_empty.pem, impossibly tiny 0 bytes
	I1205 11:40:10.809432    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 11:40:10.809454    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem (1078 bytes)
	I1205 11:40:10.809478    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem (1123 bytes)
	I1205 11:40:10.809496    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/key.pem (1679 bytes)
	I1205 11:40:10.809533    9807 certs.go:484] found cert: /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem (1708 bytes)
	I1205 11:40:10.810573    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 11:40:10.817997    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 11:40:10.825265    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 11:40:10.833116    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 11:40:10.840586    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 11:40:10.848002    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 11:40:10.855530    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 11:40:10.862614    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 11:40:10.869540    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/ssl/certs/79222.pem --> /usr/share/ca-certificates/79222.pem (1708 bytes)
	I1205 11:40:10.876749    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 11:40:10.884424    9807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/7922.pem --> /usr/share/ca-certificates/7922.pem (1338 bytes)
	I1205 11:40:10.892073    9807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 11:40:10.897605    9807 ssh_runner.go:195] Run: openssl version
	I1205 11:40:10.899924    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79222.pem && ln -fs /usr/share/ca-certificates/79222.pem /etc/ssl/certs/79222.pem"
	I1205 11:40:10.903080    9807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79222.pem
	I1205 11:40:10.904482    9807 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:28 /usr/share/ca-certificates/79222.pem
	I1205 11:40:10.904523    9807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79222.pem
	I1205 11:40:10.906494    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/79222.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 11:40:10.909575    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 11:40:10.913078    9807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:10.914675    9807 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:10.914711    9807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:40:10.916668    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 11:40:10.920108    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7922.pem && ln -fs /usr/share/ca-certificates/7922.pem /etc/ssl/certs/7922.pem"
	I1205 11:40:10.923014    9807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7922.pem
	I1205 11:40:10.924499    9807 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:28 /usr/share/ca-certificates/7922.pem
	I1205 11:40:10.924536    9807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7922.pem
	I1205 11:40:10.926583    9807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7922.pem /etc/ssl/certs/51391683.0"
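The test -L / ln -fs pairs above maintain OpenSSL's CA lookup scheme: each certificate in /etc/ssl/certs must be reachable via a <subject-hash>.0 symlink, where the hash is what `openssl x509 -hash -noout` prints. A sketch of creating one such link, using the minikubeCA path and the b5213941 hash visible in the log:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	// e.g. "b5213941", matching the b5213941.0 link created above
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}
```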
	I1205 11:40:10.929610    9807 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 11:40:10.931144    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 11:40:10.933253    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 11:40:10.935582    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 11:40:10.937782    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 11:40:10.939773    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 11:40:10.941845    9807 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
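The run of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate stays valid for at least another 24 hours; the command exits non-zero if the cert expires within the given number of seconds. The same pass, sketched in Go over the paths from the log:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// exits 1 if the cert expires within 86400s (24h) or is unreadable
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			log.Printf("%s: needs regeneration: %v", c, err)
		}
	}
}
```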
	I1205 11:40:10.944120    9807 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56484 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-050000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:40:10.944215    9807 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:40:10.959115    9807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 11:40:10.962452    9807 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 11:40:10.962642    9807 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 11:40:10.962684    9807 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 11:40:10.965521    9807 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:40:10.965737    9807 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-050000" does not appear in /Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:40:10.965759    9807 kubeconfig.go:62] /Users/jenkins/minikube-integration/20053-7409/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-050000" cluster setting kubeconfig missing "stopped-upgrade-050000" context setting]
	I1205 11:40:10.965931    9807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/kubeconfig: {Name:mk997d47fa87fe6dec2166788b387274f153b2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:40:10.967661    9807 kapi.go:59] client config for stopped-upgrade-050000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/client.key", CAFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10248f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:40:10.973448    9807 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 11:40:10.976919    9807 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-050000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
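Drift detection above is just `diff -u` between the deployed kubeadm.yaml and the freshly rendered one; any output (here the criSocket and cgroupDriver changes) triggers a full reconfigure. A minimal sketch of that check:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 1 and prints a unified diff when the files differ
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil && len(out) > 0 {
		fmt.Printf("detected kubeadm config drift, reconfiguring:\n%s", out)
	}
}
```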
	I1205 11:40:10.976927    9807 kubeadm.go:1160] stopping kube-system containers ...
	I1205 11:40:10.976990    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:40:10.988036    9807 docker.go:483] Stopping containers: [83e0bff0bfb4 1206af022e49 631dbdc8d1fd 9f8603e8ebee 38320caa6f92 b805d13791a1 e4403b247c8d 32a2c084975d]
	I1205 11:40:10.988109    9807 ssh_runner.go:195] Run: docker stop 83e0bff0bfb4 1206af022e49 631dbdc8d1fd 9f8603e8ebee 38320caa6f92 b805d13791a1 e4403b247c8d 32a2c084975d
	I1205 11:40:10.999179    9807 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 11:40:11.005056    9807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:40:11.008570    9807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:40:11.008579    9807 kubeadm.go:157] found existing configuration files:
	
	I1205 11:40:11.008630    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/admin.conf
	I1205 11:40:11.011640    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:40:11.011685    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:40:11.014479    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/kubelet.conf
	I1205 11:40:11.017233    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:40:11.017279    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:40:11.020683    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/controller-manager.conf
	I1205 11:40:11.023804    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:40:11.023855    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:40:11.026591    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/scheduler.conf
	I1205 11:40:11.029301    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:40:11.029360    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 11:40:11.032790    9807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:40:11.036467    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:11.062758    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:11.584074    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:11.718359    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:11.744054    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:40:11.771634    9807 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:40:11.771725    9807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:12.273810    9807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:12.773843    9807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:40:12.782407    9807 api_server.go:72] duration metric: took 1.010781834s to wait for apiserver process to appear ...
	I1205 11:40:12.782417    9807 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:40:12.783174    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:17.784679    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:17.784776    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:22.785730    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:22.785828    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:27.786938    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:27.786965    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:32.787919    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:32.787958    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:37.789127    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:37.789159    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:42.790938    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:42.791008    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:47.793117    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:47.793221    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:52.796074    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:52.796164    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:40:57.798759    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:40:57.798811    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:02.799622    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:02.799730    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:07.802471    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:07.802583    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:12.803282    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
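Each Checking/stopped pair above is one iteration of a healthz poll: a GET against https://10.0.2.15:8443/healthz with roughly a 5-second client timeout, looping until the apiserver answers or the overall wait budget expires. A self-contained sketch; the timeouts are assumptions read off the log intervals:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between attempts
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		// each failed probe is followed immediately by the next check, as in the log
	}
	fmt.Println("apiserver never became healthy")
}
```

In this run the apiserver never responds, so the poll keeps timing out and the test eventually gathers component logs, as seen below.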
	I1205 11:41:12.804694    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:12.832415    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:12.832560    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:12.864164    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:12.864261    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:12.876109    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:12.876183    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:12.887135    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:12.887219    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:12.897462    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:12.897537    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:12.910004    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:12.910084    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:12.926379    9807 logs.go:282] 0 containers: []
	W1205 11:41:12.926391    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:12.926456    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:12.937444    9807 logs.go:282] 0 containers: []
	W1205 11:41:12.937456    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:12.937470    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:12.937477    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:12.954073    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:12.954086    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:12.966161    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:12.966172    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:12.979611    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:12.979622    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:12.994916    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:12.994933    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:13.010326    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:13.010337    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:13.024648    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:13.024658    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:13.040582    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:13.040593    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:13.058147    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:13.058158    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:13.063850    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:13.063862    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:13.171508    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:13.171526    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:13.199401    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:13.199411    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:13.214894    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:13.214907    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:13.254311    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:13.254318    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:13.265957    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:13.265969    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:15.793289    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:20.795609    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:20.795857    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:20.816764    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:20.816872    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:20.831994    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:20.832081    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:20.844412    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:20.844493    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:20.855218    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:20.855286    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:20.866028    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:20.866103    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:20.885390    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:20.885468    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:20.896121    9807 logs.go:282] 0 containers: []
	W1205 11:41:20.896135    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:20.896204    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:20.906673    9807 logs.go:282] 0 containers: []
	W1205 11:41:20.906684    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:20.906693    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:20.906699    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:20.921669    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:20.921683    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:20.933597    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:20.933614    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:20.944805    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:20.944815    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:20.971331    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:20.971338    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:21.010186    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:21.010195    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:21.024459    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:21.024468    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:21.045084    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:21.045096    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:21.062511    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:21.062521    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:21.087848    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:21.087862    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:21.102019    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:21.102032    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:21.116144    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:21.116154    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:21.153069    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:21.153081    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:21.156860    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:21.156866    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:21.171337    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:21.171347    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:23.685788    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:28.687018    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:28.687257    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:28.703855    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:28.703950    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:28.715898    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:28.715968    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:28.726567    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:28.726636    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:28.737712    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:28.737799    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:28.748453    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:28.748533    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:28.759048    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:28.759131    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:28.769496    9807 logs.go:282] 0 containers: []
	W1205 11:41:28.769508    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:28.769574    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:28.779509    9807 logs.go:282] 0 containers: []
	W1205 11:41:28.779519    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:28.779526    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:28.779531    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:28.794284    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:28.794294    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:28.806047    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:28.806061    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:28.844555    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:28.844565    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:28.870188    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:28.870200    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:28.874688    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:28.874698    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:28.888438    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:28.888451    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:28.900021    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:28.900032    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:28.913491    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:28.913502    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:28.928605    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:28.928615    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:28.940313    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:28.940323    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:28.959764    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:28.959773    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:28.978038    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:28.978048    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:29.003049    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:29.003057    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:29.038577    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:29.038588    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:31.554788    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:36.556027    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:36.556333    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:36.571430    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:36.571511    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:36.583848    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:36.583939    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:36.594063    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:36.594146    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:36.607217    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:36.607293    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:36.620589    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:36.620667    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:36.631638    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:36.631716    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:36.641822    9807 logs.go:282] 0 containers: []
	W1205 11:41:36.641833    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:36.641897    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:36.651578    9807 logs.go:282] 0 containers: []
	W1205 11:41:36.651589    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:36.651597    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:36.651602    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:36.655888    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:36.655895    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:36.679941    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:36.679950    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:36.698822    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:36.698834    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:36.712552    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:36.712563    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:36.727214    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:36.727225    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:36.738455    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:36.738466    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:36.777942    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:36.777966    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:36.792162    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:36.792172    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:36.803688    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:36.803700    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:36.822984    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:36.822995    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:36.861352    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:36.861366    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:36.872829    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:36.872840    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:36.884762    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:36.884772    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:36.898260    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:36.898268    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:39.425795    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:44.428477    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:44.428823    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:44.455215    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:44.455345    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:44.472735    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:44.472841    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:44.485950    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:44.486030    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:44.497599    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:44.497676    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:44.509127    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:44.509214    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:44.519497    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:44.519576    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:44.529962    9807 logs.go:282] 0 containers: []
	W1205 11:41:44.529973    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:44.530039    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:44.539669    9807 logs.go:282] 0 containers: []
	W1205 11:41:44.539679    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:44.539688    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:44.539694    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:44.564424    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:44.564436    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:44.576197    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:44.576209    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:44.588247    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:44.588257    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:44.603756    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:44.603765    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:44.608429    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:44.608438    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:44.645855    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:44.645867    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:44.660110    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:44.660121    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:44.677555    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:44.677566    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:44.692266    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:44.692277    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:44.704113    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:44.704126    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:44.728702    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:44.728713    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:44.767200    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:44.767210    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:44.784923    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:44.784933    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:44.800574    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:44.800587    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:47.317592    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:41:52.319879    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:41:52.320171    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:41:52.343952    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:41:52.344092    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:41:52.365845    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:41:52.365935    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:41:52.377971    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:41:52.378048    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:41:52.388717    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:41:52.388796    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:41:52.399234    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:41:52.399305    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:41:52.422285    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:41:52.422364    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:41:52.433323    9807 logs.go:282] 0 containers: []
	W1205 11:41:52.433335    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:41:52.433401    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:41:52.447328    9807 logs.go:282] 0 containers: []
	W1205 11:41:52.447339    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:41:52.447347    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:41:52.447354    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:41:52.462465    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:41:52.462476    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:41:52.466788    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:41:52.466795    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:41:52.503167    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:41:52.503178    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:41:52.517367    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:41:52.517378    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:41:52.534646    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:41:52.534657    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:41:52.546723    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:41:52.546735    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:41:52.571685    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:41:52.571699    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:41:52.586318    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:41:52.586329    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:41:52.599695    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:41:52.599714    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:41:52.624534    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:41:52.624542    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:41:52.663580    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:41:52.663587    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:41:52.677298    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:41:52.677308    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:41:52.688753    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:41:52.688768    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:41:52.700698    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:41:52.700709    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:41:55.218712    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:00.220963    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:00.221299    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:00.248029    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:00.248156    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:00.265856    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:00.265941    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:00.285802    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:00.285892    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:00.296945    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:00.297026    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:00.307224    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:00.307314    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:00.317556    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:00.317636    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:00.327427    9807 logs.go:282] 0 containers: []
	W1205 11:42:00.327438    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:00.327507    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:00.337808    9807 logs.go:282] 0 containers: []
	W1205 11:42:00.337817    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:00.337824    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:00.337829    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:00.374255    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:00.374264    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:00.394580    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:00.394593    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:00.413774    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:00.413784    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:00.417759    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:00.417766    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:00.435107    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:00.435117    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:00.475970    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:00.475983    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:00.501507    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:00.501519    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:00.512814    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:00.512825    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:00.524334    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:00.524344    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:00.548925    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:00.548932    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:00.560262    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:00.560272    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:00.574581    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:00.574591    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:00.592096    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:00.592106    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:00.609459    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:00.609469    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:03.122830    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:08.125222    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:08.125453    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:08.143802    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:08.143908    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:08.157043    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:08.157136    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:08.170206    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:08.170285    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:08.181039    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:08.181122    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:08.191737    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:08.191814    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:08.201897    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:08.201970    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:08.217756    9807 logs.go:282] 0 containers: []
	W1205 11:42:08.217767    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:08.217838    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:08.227775    9807 logs.go:282] 0 containers: []
	W1205 11:42:08.227789    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:08.227796    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:08.227802    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:08.267585    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:08.267598    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:08.282116    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:08.282127    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:08.293045    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:08.293055    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:08.307376    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:08.307387    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:08.332227    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:08.332234    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:08.343897    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:08.343912    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:08.379219    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:08.379232    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:08.410219    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:08.410233    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:08.432718    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:08.432729    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:08.445762    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:08.445773    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:08.463437    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:08.463447    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:08.476923    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:08.476939    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:08.481147    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:08.481155    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:08.496089    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:08.496100    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:11.009878    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:16.012264    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:16.012488    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:16.026974    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:16.027095    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:16.038857    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:16.038940    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:16.049862    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:16.049935    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:16.060505    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:16.060584    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:16.070617    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:16.070693    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:16.081353    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:16.081430    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:16.091159    9807 logs.go:282] 0 containers: []
	W1205 11:42:16.091169    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:16.091228    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:16.101711    9807 logs.go:282] 0 containers: []
	W1205 11:42:16.101721    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:16.101731    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:16.101737    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:16.115576    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:16.115585    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:16.130314    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:16.130324    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:16.142255    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:16.142267    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:16.154506    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:16.154519    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:16.191470    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:16.191480    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:16.196889    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:16.196904    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:16.232113    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:16.232126    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:16.247150    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:16.247159    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:16.261049    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:16.261063    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:16.280915    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:16.280926    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:16.296019    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:16.296033    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:16.322166    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:16.322180    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:16.333614    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:16.333628    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:16.350741    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:16.350751    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:18.877573    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:23.880264    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:23.880886    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:23.920918    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:23.921089    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:23.942330    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:23.942454    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:23.957468    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:23.957561    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:23.969876    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:23.969960    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:23.980679    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:23.980756    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:23.995388    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:23.995463    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:24.005995    9807 logs.go:282] 0 containers: []
	W1205 11:42:24.006005    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:24.006069    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:24.016499    9807 logs.go:282] 0 containers: []
	W1205 11:42:24.016511    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:24.016519    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:24.016526    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:24.051462    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:24.051475    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:24.065710    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:24.065722    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:24.079798    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:24.079810    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:24.117534    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:24.117543    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:24.130342    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:24.130355    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:24.153490    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:24.153496    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:24.164667    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:24.164676    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:24.176554    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:24.176566    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:24.181013    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:24.181021    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:24.206600    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:24.206614    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:24.221684    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:24.221695    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:24.237567    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:24.237577    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:24.257689    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:24.257699    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:24.269680    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:24.269692    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:26.788149    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:31.789964    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:31.790091    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:31.805044    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:31.805138    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:31.816780    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:31.816863    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:31.827711    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:31.827792    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:31.838244    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:31.838318    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:31.858681    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:31.858765    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:31.870969    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:31.871044    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:31.882348    9807 logs.go:282] 0 containers: []
	W1205 11:42:31.882360    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:31.882435    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:31.892265    9807 logs.go:282] 0 containers: []
	W1205 11:42:31.892277    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:31.892284    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:31.892290    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:31.906248    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:31.906258    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:31.920736    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:31.920745    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:31.934885    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:31.934895    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:31.958768    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:31.958778    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:31.971342    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:31.971358    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:32.006154    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:32.006167    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:32.017587    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:32.017597    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:32.028916    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:32.028926    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:32.046171    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:32.046181    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:32.079853    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:32.079863    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:32.093946    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:32.093961    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:32.109458    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:32.109469    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:32.146946    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:32.146956    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:32.150887    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:32.150893    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:34.667459    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:39.669747    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:39.669977    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:39.687685    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:39.687788    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:39.704056    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:39.704135    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:39.718818    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:39.718898    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:39.729417    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:39.729497    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:39.739918    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:39.739991    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:39.756169    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:39.756238    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:39.766273    9807 logs.go:282] 0 containers: []
	W1205 11:42:39.766283    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:39.766346    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:39.776845    9807 logs.go:282] 0 containers: []
	W1205 11:42:39.776858    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:39.776869    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:39.776879    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:39.816519    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:39.816528    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:39.850969    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:39.850983    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:39.875905    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:39.875914    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:39.896601    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:39.896612    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:39.910625    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:39.910634    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:39.924649    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:39.924659    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:39.949514    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:39.949524    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:39.960982    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:39.960993    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:39.972412    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:39.972424    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:39.987423    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:39.987433    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:40.000704    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:40.000714    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:40.018223    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:40.018232    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:40.022613    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:40.022618    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:40.037189    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:40.037198    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:42.550718    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:47.553031    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:47.553199    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:47.565227    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:47.565313    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:47.576564    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:47.576665    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:47.594807    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:47.594879    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:47.605138    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:47.605215    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:47.615787    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:47.615872    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:47.627278    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:47.627359    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:47.638462    9807 logs.go:282] 0 containers: []
	W1205 11:42:47.638472    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:47.638538    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:47.648856    9807 logs.go:282] 0 containers: []
	W1205 11:42:47.648867    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:47.648875    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:47.648881    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:47.686399    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:47.686407    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:47.700058    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:47.700073    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:47.714884    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:47.714895    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:47.731274    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:47.731283    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:47.767783    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:47.767797    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:47.781975    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:47.781985    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:47.793296    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:47.793308    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:47.806175    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:47.806185    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:47.819788    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:47.819797    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:47.844894    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:47.844904    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:47.859415    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:47.859424    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:47.878679    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:47.878692    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:47.882845    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:47.882853    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:47.908450    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:47.908466    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:50.424259    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:42:55.425960    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:42:55.426479    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:42:55.465934    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:42:55.466095    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:42:55.485717    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:42:55.485831    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:42:55.500820    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:42:55.500920    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:42:55.515681    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:42:55.515766    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:42:55.527118    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:42:55.527204    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:42:55.543124    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:42:55.543214    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:42:55.557758    9807 logs.go:282] 0 containers: []
	W1205 11:42:55.557791    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:42:55.557866    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:42:55.576987    9807 logs.go:282] 0 containers: []
	W1205 11:42:55.576999    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:42:55.577008    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:42:55.577014    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:42:55.615883    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:42:55.615893    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:42:55.630709    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:42:55.630722    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:42:55.634919    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:42:55.634926    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:42:55.672124    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:42:55.672138    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:42:55.690688    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:42:55.690701    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:42:55.702506    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:42:55.702517    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:42:55.716665    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:42:55.716675    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:42:55.731428    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:42:55.731444    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:42:55.747831    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:42:55.747844    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:42:55.762779    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:42:55.762790    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:42:55.788216    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:42:55.788223    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:42:55.800346    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:42:55.800359    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:42:55.826972    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:42:55.826984    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:42:55.846228    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:42:55.846241    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:42:58.360342    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:03.363089    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:03.363598    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:03.413023    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:03.413171    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:03.432567    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:03.432679    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:03.447039    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:03.447136    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:03.459791    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:03.459872    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:03.470638    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:03.470724    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:03.481873    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:03.481946    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:03.492483    9807 logs.go:282] 0 containers: []
	W1205 11:43:03.492495    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:03.492559    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:03.506953    9807 logs.go:282] 0 containers: []
	W1205 11:43:03.506963    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:03.506972    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:03.506978    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:03.522209    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:03.522221    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:03.534740    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:03.534753    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:03.545804    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:03.545817    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:03.563070    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:03.563078    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:03.574599    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:03.574608    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:03.578819    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:03.578827    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:03.593388    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:03.593402    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:03.605192    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:03.605205    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:03.622820    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:03.622834    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:03.637590    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:03.637604    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:03.676337    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:03.676345    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:03.710585    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:03.710596    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:03.725930    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:03.725943    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:03.751656    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:03.751666    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:06.277169    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:11.279462    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:11.279733    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:11.301622    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:11.301738    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:11.320582    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:11.320657    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:11.333064    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:11.333141    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:11.343802    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:11.343869    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:11.358666    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:11.358751    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:11.378479    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:11.378562    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:11.388694    9807 logs.go:282] 0 containers: []
	W1205 11:43:11.388704    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:11.388771    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:11.399249    9807 logs.go:282] 0 containers: []
	W1205 11:43:11.399263    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:11.399271    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:11.399276    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:11.433707    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:11.433722    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:11.452320    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:11.452330    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:11.490796    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:11.490806    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:11.504616    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:11.504628    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:11.515931    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:11.515943    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:11.530641    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:11.530651    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:11.542545    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:11.542558    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:11.546713    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:11.546721    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:11.560235    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:11.560245    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:11.572451    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:11.572464    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:11.598897    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:11.598907    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:11.613837    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:11.613848    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:11.633990    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:11.634001    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:11.657026    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:11.657036    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:14.170760    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:19.173485    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:19.173814    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:19.200995    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:19.201131    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:19.226846    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:19.226937    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:19.238924    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:19.239012    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:19.249231    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:19.249307    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:19.260831    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:19.260917    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:19.271819    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:19.271896    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:19.281882    9807 logs.go:282] 0 containers: []
	W1205 11:43:19.281896    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:19.281958    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:19.291904    9807 logs.go:282] 0 containers: []
	W1205 11:43:19.291917    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:19.291925    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:19.291931    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:19.327517    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:19.327527    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:19.341129    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:19.341144    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:19.361478    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:19.361488    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:19.398751    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:19.398762    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:19.402887    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:19.402895    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:19.417289    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:19.417299    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:19.429446    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:19.429457    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:19.441170    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:19.441182    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:19.453477    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:19.453487    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:19.467926    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:19.467935    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:19.493481    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:19.493490    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:19.504612    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:19.504623    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:19.519569    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:19.519579    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:19.537626    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:19.537637    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:22.063128    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:27.065404    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:27.065588    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:27.077861    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:27.077945    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:27.088002    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:27.088080    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:27.098842    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:27.098925    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:27.109235    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:27.109308    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:27.119840    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:27.119918    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:27.130438    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:27.130510    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:27.140875    9807 logs.go:282] 0 containers: []
	W1205 11:43:27.140886    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:27.140956    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:27.151078    9807 logs.go:282] 0 containers: []
	W1205 11:43:27.151091    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:27.151099    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:27.151106    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:27.189969    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:27.189977    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:27.206978    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:27.206990    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:27.218805    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:27.218817    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:27.233577    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:27.233588    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:27.251749    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:27.251759    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:27.288294    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:27.288305    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:27.311184    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:27.311191    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:27.340216    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:27.340227    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:27.351408    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:27.351419    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:27.368448    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:27.368462    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:27.373085    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:27.373093    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:27.407664    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:27.407677    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:27.422154    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:27.422166    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:27.438031    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:27.438042    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:29.951601    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:34.953270    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:34.953541    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:34.978914    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:34.979023    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:35.006042    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:35.006126    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:35.031870    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:35.031958    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:35.047777    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:35.047849    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:35.058245    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:35.058314    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:35.068649    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:35.068715    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:35.083638    9807 logs.go:282] 0 containers: []
	W1205 11:43:35.083652    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:35.083724    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:35.093595    9807 logs.go:282] 0 containers: []
	W1205 11:43:35.093604    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:35.093613    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:35.093618    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:35.128629    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:35.128642    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:35.143131    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:35.143141    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:35.154474    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:35.154485    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:35.168739    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:35.168753    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:35.180472    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:35.180482    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:35.198342    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:35.198352    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:35.222621    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:35.222630    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:35.226882    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:35.226889    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:35.239042    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:35.239052    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:35.265739    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:35.265748    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:35.281267    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:35.281277    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:35.318484    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:35.318492    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:35.335537    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:35.335547    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:35.347073    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:35.347083    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:37.864162    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:42.866809    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:42.867041    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:42.883589    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:42.883678    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:42.897327    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:42.897402    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:42.908121    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:42.908195    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:42.918859    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:42.918934    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:42.929189    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:42.929268    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:42.943296    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:42.943366    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:42.960323    9807 logs.go:282] 0 containers: []
	W1205 11:43:42.960335    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:42.960399    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:42.972285    9807 logs.go:282] 0 containers: []
	W1205 11:43:42.972296    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:42.972306    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:42.972313    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:43.007375    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:43.007388    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:43.020049    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:43.020059    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:43.043076    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:43.043085    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:43.060898    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:43.060909    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:43.072715    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:43.072726    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:43.087925    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:43.087934    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:43.119878    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:43.119892    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:43.135598    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:43.135611    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:43.147727    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:43.147742    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:43.160278    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:43.160288    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:43.174060    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:43.174068    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:43.211249    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:43.211264    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:43.215464    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:43.215471    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:43.234468    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:43.234482    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:45.750966    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:50.753674    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:50.753902    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:50.775717    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:50.775830    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:50.791651    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:50.791745    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:50.805024    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:50.805114    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:50.815887    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:50.815962    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:50.826159    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:50.826233    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:50.836337    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:50.836410    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:50.846584    9807 logs.go:282] 0 containers: []
	W1205 11:43:50.846596    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:50.846671    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:50.856663    9807 logs.go:282] 0 containers: []
	W1205 11:43:50.856674    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:50.856682    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:50.856687    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:50.870497    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:50.870509    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:50.884867    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:50.884880    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:43:50.900020    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:50.900031    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:50.911742    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:50.911754    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:50.949682    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:50.949690    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:50.983490    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:50.983500    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:50.999073    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:50.999084    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:51.010481    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:51.010493    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:51.034388    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:51.034395    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:51.045883    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:51.045893    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:51.050011    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:51.050021    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:51.079259    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:51.079275    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:51.090763    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:51.090775    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:51.110923    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:51.110932    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:53.625831    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:43:58.628015    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:43:58.628327    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:43:58.652021    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:43:58.652156    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:43:58.668564    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:43:58.668652    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:43:58.681126    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:43:58.681209    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:43:58.691707    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:43:58.691793    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:43:58.702335    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:43:58.702412    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:43:58.712801    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:43:58.712874    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:43:58.723298    9807 logs.go:282] 0 containers: []
	W1205 11:43:58.723312    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:43:58.723376    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:43:58.734025    9807 logs.go:282] 0 containers: []
	W1205 11:43:58.734037    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:43:58.734045    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:43:58.734052    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:43:58.750610    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:43:58.750624    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:43:58.768108    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:43:58.768119    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:43:58.784899    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:43:58.784913    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:43:58.798725    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:43:58.798738    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:43:58.821548    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:43:58.821557    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:43:58.825443    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:43:58.825449    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:43:58.839242    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:43:58.839252    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:43:58.864265    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:43:58.864274    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:43:58.878592    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:43:58.878603    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:43:58.889503    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:43:58.889514    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:43:58.901229    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:43:58.901239    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:43:58.913972    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:43:58.913981    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:43:58.954378    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:43:58.954387    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:43:58.991359    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:43:58.991372    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:44:01.510328    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:06.512625    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:06.512844    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:44:06.528755    9807 logs.go:282] 2 containers: [2fb26c9c0858 9f8603e8ebee]
	I1205 11:44:06.528849    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:44:06.540669    9807 logs.go:282] 2 containers: [53199fb72561 83e0bff0bfb4]
	I1205 11:44:06.540752    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:44:06.551873    9807 logs.go:282] 1 containers: [7b0bdbcb58e2]
	I1205 11:44:06.551952    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:44:06.562494    9807 logs.go:282] 2 containers: [185d476ece32 1206af022e49]
	I1205 11:44:06.562570    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:44:06.572866    9807 logs.go:282] 1 containers: [8510a49668f0]
	I1205 11:44:06.572957    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:44:06.583482    9807 logs.go:282] 2 containers: [08b32d9ffe64 631dbdc8d1fd]
	I1205 11:44:06.583554    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:44:06.593239    9807 logs.go:282] 0 containers: []
	W1205 11:44:06.593250    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:44:06.593311    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:44:06.605437    9807 logs.go:282] 0 containers: []
	W1205 11:44:06.605447    9807 logs.go:284] No container was found matching "storage-provisioner"
	I1205 11:44:06.605455    9807 logs.go:123] Gathering logs for coredns [7b0bdbcb58e2] ...
	I1205 11:44:06.605463    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b0bdbcb58e2"
	I1205 11:44:06.616867    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:44:06.616878    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:44:06.653488    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:44:06.653496    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:44:06.657532    9807 logs.go:123] Gathering logs for etcd [53199fb72561] ...
	I1205 11:44:06.657540    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53199fb72561"
	I1205 11:44:06.671925    9807 logs.go:123] Gathering logs for kube-controller-manager [08b32d9ffe64] ...
	I1205 11:44:06.671934    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08b32d9ffe64"
	I1205 11:44:06.693629    9807 logs.go:123] Gathering logs for kube-controller-manager [631dbdc8d1fd] ...
	I1205 11:44:06.693639    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 631dbdc8d1fd"
	I1205 11:44:06.707234    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:44:06.707245    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:44:06.719094    9807 logs.go:123] Gathering logs for kube-scheduler [185d476ece32] ...
	I1205 11:44:06.719103    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 185d476ece32"
	I1205 11:44:06.731202    9807 logs.go:123] Gathering logs for kube-scheduler [1206af022e49] ...
	I1205 11:44:06.731213    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1206af022e49"
	I1205 11:44:06.746787    9807 logs.go:123] Gathering logs for kube-proxy [8510a49668f0] ...
	I1205 11:44:06.746798    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8510a49668f0"
	I1205 11:44:06.758689    9807 logs.go:123] Gathering logs for kube-apiserver [2fb26c9c0858] ...
	I1205 11:44:06.758701    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fb26c9c0858"
	I1205 11:44:06.772733    9807 logs.go:123] Gathering logs for kube-apiserver [9f8603e8ebee] ...
	I1205 11:44:06.772744    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f8603e8ebee"
	I1205 11:44:06.797532    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:44:06.797542    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:44:06.834355    9807 logs.go:123] Gathering logs for etcd [83e0bff0bfb4] ...
	I1205 11:44:06.834369    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83e0bff0bfb4"
	I1205 11:44:06.849210    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:44:06.849222    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:44:09.375496    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:14.377784    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:14.378089    9807 kubeadm.go:597] duration metric: took 4m3.417639417s to restartPrimaryControlPlane
	W1205 11:44:14.378264    9807 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:44:14.378329    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:44:15.355993    9807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:44:15.361168    9807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:44:15.364073    9807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:44:15.366980    9807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:44:15.366986    9807 kubeadm.go:157] found existing configuration files:
	
	I1205 11:44:15.367014    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/admin.conf
	I1205 11:44:15.370134    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:44:15.370164    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:44:15.372778    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/kubelet.conf
	I1205 11:44:15.375441    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:44:15.375465    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:44:15.378676    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/controller-manager.conf
	I1205 11:44:15.381639    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:44:15.381673    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:44:15.384372    9807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/scheduler.conf
	I1205 11:44:15.387334    9807 kubeadm.go:163] "https://control-plane.minikube.internal:56484" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56484 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:44:15.387363    9807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 11:44:15.390719    9807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:44:15.410057    9807 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:44:15.410123    9807 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:44:15.458672    9807 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:44:15.458726    9807 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:44:15.458810    9807 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 11:44:15.507361    9807 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:44:15.510713    9807 out.go:235]   - Generating certificates and keys ...
	I1205 11:44:15.510751    9807 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:44:15.510781    9807 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:44:15.510827    9807 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:44:15.510857    9807 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:44:15.510891    9807 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:44:15.510914    9807 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:44:15.510947    9807 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:44:15.510987    9807 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:44:15.511023    9807 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:44:15.511071    9807 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:44:15.511091    9807 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:44:15.511121    9807 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:44:15.608673    9807 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:44:15.660676    9807 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:44:15.886254    9807 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:44:16.078413    9807 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:44:16.108671    9807 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:44:16.109561    9807 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:44:16.109597    9807 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:44:16.192470    9807 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:44:16.195706    9807 out.go:235]   - Booting up control plane ...
	I1205 11:44:16.195752    9807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:44:16.195811    9807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:44:16.195854    9807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:44:16.195895    9807 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:44:16.195985    9807 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 11:44:20.698855    9807 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505080 seconds
	I1205 11:44:20.699075    9807 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:44:20.707718    9807 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:44:21.220311    9807 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:44:21.220453    9807 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-050000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:44:21.726729    9807 kubeadm.go:310] [bootstrap-token] Using token: thydrl.wsf2gt9d07u1ey43
	I1205 11:44:21.729650    9807 out.go:235]   - Configuring RBAC rules ...
	I1205 11:44:21.729733    9807 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:44:21.729851    9807 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:44:21.737872    9807 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:44:21.739240    9807 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1205 11:44:21.740496    9807 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:44:21.741920    9807 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:44:21.746072    9807 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:44:21.938678    9807 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:44:22.131310    9807 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:44:22.131928    9807 kubeadm.go:310] 
	I1205 11:44:22.131957    9807 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:44:22.131963    9807 kubeadm.go:310] 
	I1205 11:44:22.132000    9807 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:44:22.132043    9807 kubeadm.go:310] 
	I1205 11:44:22.132058    9807 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:44:22.132111    9807 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:44:22.132143    9807 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:44:22.132166    9807 kubeadm.go:310] 
	I1205 11:44:22.132196    9807 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:44:22.132201    9807 kubeadm.go:310] 
	I1205 11:44:22.132229    9807 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:44:22.132233    9807 kubeadm.go:310] 
	I1205 11:44:22.132283    9807 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:44:22.132324    9807 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:44:22.132369    9807 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:44:22.132393    9807 kubeadm.go:310] 
	I1205 11:44:22.132495    9807 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:44:22.132641    9807 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:44:22.132645    9807 kubeadm.go:310] 
	I1205 11:44:22.132684    9807 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token thydrl.wsf2gt9d07u1ey43 \
	I1205 11:44:22.132775    9807 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4d1b162eb3608111c477a7c870488ffbf3cfc36b3f1c56af279a8c3b5e43f1b \
	I1205 11:44:22.132788    9807 kubeadm.go:310] 	--control-plane 
	I1205 11:44:22.132790    9807 kubeadm.go:310] 
	I1205 11:44:22.132833    9807 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:44:22.132835    9807 kubeadm.go:310] 
	I1205 11:44:22.132887    9807 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token thydrl.wsf2gt9d07u1ey43 \
	I1205 11:44:22.132940    9807 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4d1b162eb3608111c477a7c870488ffbf3cfc36b3f1c56af279a8c3b5e43f1b 
	I1205 11:44:22.132993    9807 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 11:44:22.133000    9807 cni.go:84] Creating CNI manager for ""
	I1205 11:44:22.133008    9807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:44:22.136616    9807 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:44:22.142700    9807 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:44:22.145899    9807 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
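	[The 1-k8s.conflist written above is recorded only by size (496 bytes); its contents do not appear in the log. For reference, a bridge CNI config of the kind this step installs typically looks like the sketch below. Every field value is an illustrative assumption, not taken from the log; the snippet is a small self-contained Go program so it can be run to print the config.]

	    // Sketch of a typical bridge CNI conflist like the one written to
	    // /etc/cni/net.d/1-k8s.conflist above. Field values are illustrative
	    // assumptions; the log only records the file's size (496 bytes).
	    package main

	    import "fmt"

	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() { fmt.Println(conflist) }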
	I1205 11:44:22.150681    9807 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:44:22.150744    9807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:44:22.150932    9807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-050000 minikube.k8s.io/updated_at=2024_12_05T11_44_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=stopped-upgrade-050000 minikube.k8s.io/primary=true
	I1205 11:44:22.197080    9807 kubeadm.go:1113] duration metric: took 46.386958ms to wait for elevateKubeSystemPrivileges
	I1205 11:44:22.197092    9807 ops.go:34] apiserver oom_adj: -16
	I1205 11:44:22.208082    9807 kubeadm.go:394] duration metric: took 4m11.266244625s to StartCluster
	I1205 11:44:22.208099    9807 settings.go:142] acquiring lock: {Name:mk929d066faf20e4c3c6b7a024ba4d845a405894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:44:22.208285    9807 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:44:22.209025    9807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/kubeconfig: {Name:mk997d47fa87fe6dec2166788b387274f153b2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:44:22.209393    9807 config.go:182] Loaded profile config "stopped-upgrade-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:44:22.209496    9807 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:44:22.209558    9807 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:44:22.209598    9807 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-050000"
	I1205 11:44:22.209607    9807 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-050000"
	W1205 11:44:22.209611    9807 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:44:22.209616    9807 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-050000"
	I1205 11:44:22.209622    9807 host.go:66] Checking if "stopped-upgrade-050000" exists ...
	I1205 11:44:22.209639    9807 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-050000"
	I1205 11:44:22.213647    9807 out.go:177] * Verifying Kubernetes components...
	I1205 11:44:22.214361    9807 kapi.go:59] client config for stopped-upgrade-050000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/stopped-upgrade-050000/client.key", CAFile:"/Users/jenkins/minikube-integration/20053-7409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10248f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:44:22.218794    9807 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-050000"
	W1205 11:44:22.218801    9807 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:44:22.218808    9807 host.go:66] Checking if "stopped-upgrade-050000" exists ...
	I1205 11:44:22.219518    9807 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:44:22.219525    9807 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:44:22.219530    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	I1205 11:44:22.221603    9807 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:44:22.225667    9807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:44:22.228582    9807 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:44:22.228588    9807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:44:22.228594    9807 sshutil.go:53] new ssh client: &{IP:localhost Port:56452 SSHKeyPath:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/stopped-upgrade-050000/id_rsa Username:docker}
	I1205 11:44:22.316353    9807 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:44:22.321484    9807 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:44:22.321544    9807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:44:22.325371    9807 api_server.go:72] duration metric: took 115.864625ms to wait for apiserver process to appear ...
	I1205 11:44:22.325379    9807 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:44:22.325386    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:22.344938    9807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:44:22.355876    9807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:44:22.695080    9807 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:44:22.695091    9807 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:44:27.327458    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:27.327520    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:32.328205    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:32.328277    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:37.328884    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:37.328907    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:42.329479    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:42.329505    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:47.330286    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:47.330308    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:44:52.331249    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:52.331275    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1205 11:44:52.697777    9807 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:44:52.702163    9807 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:44:52.713029    9807 addons.go:510] duration metric: took 30.503934917s for enable addons: enabled=[storage-provisioner]
	I1205 11:44:57.332540    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:44:57.332580    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:02.334255    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:02.334295    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:07.335700    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:07.335743    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:12.337357    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:12.337376    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:17.339512    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:17.339559    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:22.341830    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
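	[The long run of healthz probes above follows a simple poll-with-timeout pattern: each GET to https://10.0.2.15:8443/healthz is given a 5-second client timeout, fails with "context deadline exceeded" because the apiserver never answers, and is retried on the next tick until minikube gives up and gathers logs instead. Below is a minimal Go sketch of that polling loop — an illustration of the pattern, not minikube's actual api_server.go; only the URL and the 5s timeout are taken from the log.]

	    // Poll an apiserver /healthz endpoint until it answers 200 OK.
	    // Illustrative sketch only: minikube's real loop also checks the
	    // apiserver process, respects an overall deadline, and verifies TLS
	    // against its own CA rather than skipping verification.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second, // matches the Client.Timeout errors in the log
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	    		},
	    	}
	    	for {
	    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
	    		if err != nil {
	    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
	    			time.Sleep(5 * time.Second)
	    			continue
	    		}
	    		resp.Body.Close()
	    		if resp.StatusCode == http.StatusOK {
	    			fmt.Println("apiserver healthy")
	    			return
	    		}
	    		time.Sleep(5 * time.Second)
	    	}
	    }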
	I1205 11:45:22.341947    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:45:22.352588    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:45:22.352670    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:45:22.362789    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:45:22.362870    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:45:22.373003    9807 logs.go:282] 2 containers: [c1a00b201d69 a169e0bf6832]
	I1205 11:45:22.373076    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:45:22.383174    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:45:22.383255    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:45:22.395366    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:45:22.395443    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:45:22.405795    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:45:22.405869    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:45:22.416111    9807 logs.go:282] 0 containers: []
	W1205 11:45:22.416127    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:45:22.416196    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:45:22.426619    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:45:22.426633    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:45:22.426639    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:45:22.445583    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:45:22.445594    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:45:22.457595    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:45:22.457605    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:45:22.475166    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:45:22.475176    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:45:22.486593    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:45:22.486603    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:45:22.518701    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:22.518795    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:22.519716    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:45:22.519721    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:45:22.523752    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:45:22.523761    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:45:22.560960    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:45:22.560971    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:45:22.572680    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:45:22.572692    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:45:22.584775    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:45:22.584787    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:45:22.600691    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:45:22.600701    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:45:22.615884    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:45:22.615895    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:45:22.631775    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:45:22.631785    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:45:22.656692    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:22.656705    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:45:22.656732    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:45:22.656736    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:22.656744    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:22.656748    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:22.656750    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:45:32.660794    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:37.662951    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:37.663068    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:45:37.674413    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:45:37.674494    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:45:37.684830    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:45:37.684907    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:45:37.695436    9807 logs.go:282] 2 containers: [c1a00b201d69 a169e0bf6832]
	I1205 11:45:37.695509    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:45:37.705460    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:45:37.705531    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:45:37.715757    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:45:37.715849    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:45:37.726184    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:45:37.726252    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:45:37.736402    9807 logs.go:282] 0 containers: []
	W1205 11:45:37.736415    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:45:37.736484    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:45:37.746331    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:45:37.746347    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:45:37.746352    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:45:37.783950    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:45:37.783966    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:45:37.798425    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:45:37.798442    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:45:37.811037    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:45:37.811050    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:45:37.825501    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:45:37.825512    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:45:37.840333    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:45:37.840343    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:45:37.859604    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:45:37.859614    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:45:37.872520    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:45:37.872535    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:45:37.907746    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:37.907841    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:37.908777    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:45:37.908782    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:45:37.913496    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:45:37.913502    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:45:37.926922    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:45:37.926932    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:45:37.938424    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:45:37.938436    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:45:37.955892    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:45:37.955902    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:45:37.980969    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:37.980977    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:45:37.981003    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:45:37.981007    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:37.981010    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:37.981026    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:37.981030    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:45:47.984556    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:45:52.986561    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:45:52.986821    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:45:53.008966    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:45:53.009054    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:45:53.023519    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:45:53.023609    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:45:53.036726    9807 logs.go:282] 2 containers: [c1a00b201d69 a169e0bf6832]
	I1205 11:45:53.036795    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:45:53.047333    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:45:53.047414    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:45:53.064542    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:45:53.064621    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:45:53.075451    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:45:53.075530    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:45:53.085345    9807 logs.go:282] 0 containers: []
	W1205 11:45:53.085358    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:45:53.085426    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:45:53.096454    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:45:53.096492    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:45:53.096498    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:45:53.113421    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:45:53.113435    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:45:53.126858    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:45:53.126870    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:45:53.142296    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:45:53.142311    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:45:53.155081    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:45:53.155091    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:45:53.172405    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:45:53.172415    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:45:53.184248    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:45:53.184258    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:45:53.220986    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:45:53.220997    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:45:53.235756    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:45:53.235767    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:45:53.260460    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:45:53.260471    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:45:53.272285    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:45:53.272296    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:45:53.283420    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:45:53.283430    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:45:53.315122    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:53.315215    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:53.316118    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:45:53.316122    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:45:53.320031    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:53.320042    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:45:53.320065    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:45:53.320069    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:45:53.320074    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:45:53.320090    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:45:53.320094    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:03.321729    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:08.324073    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:08.324330    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:08.343643    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:46:08.343761    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:08.358528    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:46:08.358618    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:08.370658    9807 logs.go:282] 2 containers: [c1a00b201d69 a169e0bf6832]
	I1205 11:46:08.370734    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:08.381608    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:46:08.381676    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:08.391909    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:46:08.391984    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:08.401639    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:46:08.401699    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:08.411299    9807 logs.go:282] 0 containers: []
	W1205 11:46:08.411311    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:08.411375    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:08.421620    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:46:08.421635    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:08.421642    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:08.457471    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:46:08.457486    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:46:08.474081    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:46:08.474091    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:46:08.489016    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:46:08.489027    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:46:08.500279    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:46:08.500293    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:46:08.518200    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:08.518212    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:08.542593    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:08.542600    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:08.576259    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:08.576352    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:08.577304    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:08.577308    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:08.582024    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:46:08.582032    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:08.593383    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:46:08.593394    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:46:08.606236    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:46:08.606247    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:46:08.618406    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:46:08.618417    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:46:08.632873    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:46:08.632884    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:46:08.648736    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:08.648745    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:08.648770    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:46:08.648774    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:08.648777    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:08.648781    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:08.648784    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:18.651751    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:23.654174    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:23.654430    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:23.680806    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:46:23.680911    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:23.697396    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:46:23.697482    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:23.714696    9807 logs.go:282] 2 containers: [c1a00b201d69 a169e0bf6832]
	I1205 11:46:23.714766    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:23.725817    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:46:23.725888    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:23.736609    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:46:23.736688    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:23.749754    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:46:23.749816    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:23.760409    9807 logs.go:282] 0 containers: []
	W1205 11:46:23.760419    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:23.760486    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:23.770780    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:46:23.770797    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:46:23.770803    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:46:23.784618    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:23.784629    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:23.808520    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:23.808534    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:23.843712    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:46:23.843728    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:46:23.857450    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:46:23.857461    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:46:23.869090    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:46:23.869101    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:46:23.880259    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:46:23.880269    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:46:23.895610    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:46:23.895622    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:23.906906    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:23.906922    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:23.940134    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:23.940234    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:23.941188    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:23.941194    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:23.946085    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:46:23.946095    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:46:23.960850    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:46:23.960861    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:46:23.972989    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:46:23.972999    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:46:23.990583    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:23.990593    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:23.990622    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:46:23.990626    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:23.990630    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:23.990633    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:23.990636    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:33.994685    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:38.996765    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:38.996948    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:39.011978    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:46:39.012066    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:39.024348    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:46:39.024430    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:39.039535    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:46:39.039618    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:39.050048    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:46:39.050132    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:39.063730    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:46:39.063807    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:39.073946    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:46:39.074030    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:39.084541    9807 logs.go:282] 0 containers: []
	W1205 11:46:39.084551    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:39.084612    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:39.094907    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:46:39.094925    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:46:39.094930    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:46:39.109041    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:46:39.109055    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:46:39.121121    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:46:39.121132    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:46:39.132964    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:46:39.132976    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:39.145707    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:39.145719    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:39.178459    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:39.178553    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:39.179522    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:39.179530    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:39.213737    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:46:39.213747    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:46:39.225751    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:46:39.225763    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:46:39.241660    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:46:39.241672    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:46:39.264901    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:39.264911    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:39.288361    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:39.288368    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:39.292542    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:46:39.292550    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:46:39.306177    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:46:39.306189    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:46:39.317893    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:46:39.317901    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:46:39.329707    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:46:39.329718    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:46:39.342474    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:39.342484    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:39.342509    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:46:39.342513    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:39.342517    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:39.342521    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:39.342524    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:46:49.346162    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:46:54.348509    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:46:54.348714    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:46:54.364864    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:46:54.364960    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:46:54.377376    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:46:54.377456    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:46:54.388234    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:46:54.388318    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:46:54.398567    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:46:54.398643    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:46:54.408680    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:46:54.408759    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:46:54.419648    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:46:54.419726    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:46:54.429623    9807 logs.go:282] 0 containers: []
	W1205 11:46:54.429634    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:46:54.429693    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:46:54.443845    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:46:54.443864    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:46:54.443871    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:46:54.455800    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:46:54.455812    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:46:54.490134    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:54.490228    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:54.491183    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:46:54.491191    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:46:54.505156    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:46:54.505169    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:46:54.530751    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:46:54.530759    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:46:54.534711    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:46:54.534718    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:46:54.550277    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:46:54.550289    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:46:54.561699    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:46:54.561712    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:46:54.573468    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:46:54.573479    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:46:54.585334    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:46:54.585344    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:46:54.597351    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:46:54.597361    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:46:54.614830    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:46:54.614840    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:46:54.626504    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:46:54.626514    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:46:54.662286    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:46:54.662298    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:46:54.679195    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:46:54.679204    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:46:54.692334    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:54.692344    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:46:54.692370    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:46:54.692374    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:46:54.692377    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:46:54.692381    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:46:54.692383    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:04.696470    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:09.698222    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:09.698461    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:09.713638    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:47:09.713731    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:09.725703    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:47:09.725789    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:09.736793    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:47:09.736870    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:09.747308    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:47:09.747378    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:09.757934    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:47:09.758014    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:09.768810    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:47:09.768883    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:09.779378    9807 logs.go:282] 0 containers: []
	W1205 11:47:09.779391    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:09.779458    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:09.789881    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:47:09.789897    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:09.789905    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:09.824502    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:09.824596    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:09.825511    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:47:09.825520    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:47:09.840172    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:47:09.840180    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:47:09.852234    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:47:09.852244    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:47:09.864938    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:47:09.864949    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:47:09.877211    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:47:09.877222    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:47:09.889363    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:47:09.889372    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:47:09.900546    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:09.900556    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:09.958266    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:47:09.958278    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:47:09.973925    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:09.973940    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:09.979886    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:47:09.979895    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:47:09.996984    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:47:09.996998    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:47:10.015496    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:47:10.015510    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:10.031853    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:47:10.031864    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:47:10.044345    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:10.044355    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:10.068566    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:10.068575    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:10.068602    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:47:10.068606    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:10.068611    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:10.068619    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:10.068622    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:20.072679    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:25.074912    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:25.075162    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:25.099081    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:47:25.099182    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:25.118058    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:47:25.118145    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:25.129675    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:47:25.129755    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:25.155408    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:47:25.155490    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:25.167364    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:47:25.167441    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:25.177880    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:47:25.177958    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:25.188003    9807 logs.go:282] 0 containers: []
	W1205 11:47:25.188014    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:25.188072    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:25.198257    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:47:25.198278    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:25.198284    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:25.230726    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:25.230820    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:25.231749    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:47:25.231755    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:47:25.246015    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:47:25.246029    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:47:25.259533    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:47:25.259545    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:47:25.275190    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:47:25.275200    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:47:25.286594    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:25.286604    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:25.309982    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:47:25.309990    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:25.321739    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:25.321750    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:25.326261    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:25.326271    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:25.361358    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:47:25.361370    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:47:25.373458    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:47:25.373470    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:47:25.384910    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:47:25.384922    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:47:25.402351    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:47:25.402362    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:47:25.416385    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:47:25.416395    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:47:25.427830    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:47:25.427842    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:47:25.442864    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:25.442873    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:25.442911    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:47:25.442915    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:25.442919    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:25.442923    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:25.442926    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:35.446986    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:40.449273    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:40.449473    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:40.467765    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:47:40.467875    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:40.482824    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:47:40.482901    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:40.494454    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:47:40.494560    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:40.504903    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:47:40.504977    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:40.515518    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:47:40.515594    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:40.526110    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:47:40.526192    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:40.537082    9807 logs.go:282] 0 containers: []
	W1205 11:47:40.537094    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:40.537160    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:40.547970    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:47:40.547986    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:47:40.547992    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:47:40.563456    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:47:40.563466    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:47:40.575135    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:40.575144    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:40.579288    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:40.579298    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:40.614457    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:47:40.614471    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:47:40.630234    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:47:40.630246    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:47:40.642057    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:40.642065    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:40.665670    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:47:40.665680    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:40.678269    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:40.678283    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:40.713694    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:40.713789    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:40.714755    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:47:40.714763    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:47:40.733167    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:47:40.733178    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:47:40.745441    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:47:40.745453    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:47:40.759504    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:47:40.759514    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:47:40.776751    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:47:40.776761    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:47:40.788239    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:47:40.788250    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:47:40.800190    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:40.800200    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:40.800228    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:47:40.800232    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:40.800235    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:40.800238    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:40.800241    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:47:50.804247    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:47:55.806597    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:47:55.807080    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:47:55.840697    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:47:55.840869    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:47:55.859757    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:47:55.859850    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:47:55.873865    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:47:55.873952    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:47:55.886531    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:47:55.886603    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:47:55.897409    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:47:55.897491    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:47:55.908829    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:47:55.908904    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:47:55.919409    9807 logs.go:282] 0 containers: []
	W1205 11:47:55.919422    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:47:55.919489    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:47:55.930153    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:47:55.930168    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:47:55.930173    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:47:55.945494    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:47:55.945508    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:47:55.970801    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:47:55.970809    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:47:55.983161    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:47:55.983171    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:47:55.988007    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:47:55.988016    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:47:56.000431    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:47:56.000441    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:47:56.015681    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:47:56.015691    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:47:56.027652    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:47:56.027663    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:47:56.060911    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:56.061005    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:56.061941    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:47:56.061948    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:47:56.073626    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:47:56.073636    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:47:56.085531    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:47:56.085541    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:47:56.096969    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:47:56.096980    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:47:56.117654    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:47:56.117664    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:47:56.129295    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:47:56.129307    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:47:56.167191    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:47:56.167201    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:47:56.181824    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:56.181833    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:47:56.181860    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:47:56.181864    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:47:56.181867    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:47:56.181871    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:47:56.181874    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:06.185917    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:11.188175    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:11.188700    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:48:11.225467    9807 logs.go:282] 1 containers: [a8d82f04fe83]
	I1205 11:48:11.225630    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:48:11.244859    9807 logs.go:282] 1 containers: [0c278809193a]
	I1205 11:48:11.244968    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:48:11.263035    9807 logs.go:282] 4 containers: [c3470a3fec9e 99dfd61d76ba c1a00b201d69 a169e0bf6832]
	I1205 11:48:11.263129    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:48:11.275407    9807 logs.go:282] 1 containers: [3b252249eade]
	I1205 11:48:11.275484    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:48:11.286167    9807 logs.go:282] 1 containers: [962c1afeedc6]
	I1205 11:48:11.286242    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:48:11.298430    9807 logs.go:282] 1 containers: [7106ebea5e7b]
	I1205 11:48:11.298507    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:48:11.308775    9807 logs.go:282] 0 containers: []
	W1205 11:48:11.308784    9807 logs.go:284] No container was found matching "kindnet"
	I1205 11:48:11.308856    9807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:48:11.321162    9807 logs.go:282] 1 containers: [a8558d684218]
	I1205 11:48:11.321179    9807 logs.go:123] Gathering logs for dmesg ...
	I1205 11:48:11.321185    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:48:11.325616    9807 logs.go:123] Gathering logs for kube-scheduler [3b252249eade] ...
	I1205 11:48:11.325625    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b252249eade"
	I1205 11:48:11.341320    9807 logs.go:123] Gathering logs for kube-proxy [962c1afeedc6] ...
	I1205 11:48:11.341334    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 962c1afeedc6"
	I1205 11:48:11.367711    9807 logs.go:123] Gathering logs for kube-controller-manager [7106ebea5e7b] ...
	I1205 11:48:11.367728    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7106ebea5e7b"
	I1205 11:48:11.398624    9807 logs.go:123] Gathering logs for storage-provisioner [a8558d684218] ...
	I1205 11:48:11.398643    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8558d684218"
	I1205 11:48:11.412187    9807 logs.go:123] Gathering logs for kube-apiserver [a8d82f04fe83] ...
	I1205 11:48:11.412202    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8d82f04fe83"
	I1205 11:48:11.427351    9807 logs.go:123] Gathering logs for coredns [c3470a3fec9e] ...
	I1205 11:48:11.427364    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3470a3fec9e"
	I1205 11:48:11.439388    9807 logs.go:123] Gathering logs for coredns [c1a00b201d69] ...
	I1205 11:48:11.439401    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a00b201d69"
	I1205 11:48:11.451914    9807 logs.go:123] Gathering logs for Docker ...
	I1205 11:48:11.451924    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:48:11.476361    9807 logs.go:123] Gathering logs for kubelet ...
	I1205 11:48:11.476370    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 11:48:11.509554    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:48:11.509647    9807 logs.go:138] Found kubelet problem: Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:48:11.510598    9807 logs.go:123] Gathering logs for etcd [0c278809193a] ...
	I1205 11:48:11.510604    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c278809193a"
	I1205 11:48:11.524891    9807 logs.go:123] Gathering logs for coredns [99dfd61d76ba] ...
	I1205 11:48:11.524901    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99dfd61d76ba"
	I1205 11:48:11.537249    9807 logs.go:123] Gathering logs for coredns [a169e0bf6832] ...
	I1205 11:48:11.537260    9807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a169e0bf6832"
	I1205 11:48:11.549344    9807 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:48:11.549356    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:48:11.583165    9807 logs.go:123] Gathering logs for container status ...
	I1205 11:48:11.583177    9807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:48:11.595230    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:11.595241    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 11:48:11.595273    9807 out.go:270] X Problems detected in kubelet:
	W1205 11:48:11.595278    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: W1205 19:44:34.919346    9709 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	W1205 11:48:11.595282    9807 out.go:270]   Dec 05 19:44:34 stopped-upgrade-050000 kubelet[9709]: E1205 19:44:34.919365    9709 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-050000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-050000' and this object
	I1205 11:48:11.595285    9807 out.go:358] Setting ErrFile to fd 2...
	I1205 11:48:11.595288    9807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:48:21.599328    9807 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:48:26.600588    9807 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:48:26.605172    9807 out.go:201] 
	W1205 11:48:26.609090    9807 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1205 11:48:26.609100    9807 out.go:270] * 
	W1205 11:48:26.610222    9807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:48:26.620073    9807 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-050000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (592.11s)
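
Analysis note: this upgrade run surfaces two distinct problems. The kubelet warnings flagged above ("no relationship found between node 'stopped-upgrade-050000' and this object") are Node-authorizer denials: the apiserver has not yet linked the node to the kube-proxy ConfigMap it is trying to watch. Assuming a working kubeconfig for the cluster, the same authorization decision could be probed with kubectl's standard impersonation flags:

	kubectl auth can-i list configmaps --namespace=kube-system --as=system:node:stopped-upgrade-050000 --as-group=system:nodes

Separately, the apiserver /healthz endpoint polled at https://10.0.2.15:8443/healthz never reports healthy inside the 6m0s node-wait, which is what ultimately fails the test. With shell access to the guest, a manual probe of the same endpoint (plain curl, certificate checks skipped) would be:

	curl -k --max-time 5 https://10.0.2.15:8443/healthz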

                                                
                                    
TestPause/serial/Start (10.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-676000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-676000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.150355375s)

                                                
                                                
-- stdout --
	* [pause-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-676000" primary control-plane node in "pause-676000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-676000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-676000 -n pause-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-676000 -n pause-676000: exit status 7 (66.073875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.22s)
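
Analysis note: this failure, and every TestNoKubernetes and TestNetworkPlugins failure below, shares one root cause: nothing is listening on /var/run/socket_vmnet, so each qemu2 VM create or restart is refused before the guest can boot (exit status 80, GUEST_PROVISION). Assuming socket_vmnet was installed through Homebrew, as minikube's qemu2 driver documentation describes, a plausible recovery on the CI host would be to check the socket and restart the service; the exact service management on this Jenkins agent may differ:

	ls -l /var/run/socket_vmnet   # should exist and be a Unix socket
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet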

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 : exit status 80 (9.80313475s)

                                                
                                                
-- stdout --
	* [NoKubernetes-344000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-344000" primary control-plane node in "NoKubernetes-344000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000: exit status 7 (58.260209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 : exit status 80 (5.917983s)

                                                
                                                
-- stdout --
	* [NoKubernetes-344000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-344000
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-344000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000: exit status 7 (66.997875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.99s)

                                                
                                    
TestNoKubernetes/serial/Start (7.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 : exit status 80 (7.322571208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-344000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-344000
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-344000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000: exit status 7 (60.257958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.38s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.87s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.87s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.26s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20053
- KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current128441929/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.26s)
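
Analysis note: both TestHyperkitDriverSkipUpgrade subtests fail for the same structural reason rather than an upgrade bug: hyperkit ships x86_64 binaries only, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any driver-upgrade logic runs. A minimal, illustrative guard a test wrapper could apply on such hosts (the skip message is hypothetical, not minikube output):

	if [ "$(uname -s)/$(uname -m)" = "Darwin/arm64" ]; then
	    echo "SKIP: hyperkit driver is unsupported on darwin/arm64"
	    exit 0
	fi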

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 
W1205 11:49:39.858563    7922 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1205 11:49:39.858776    7922 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1205 11:49:39.858831    7922 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/001/docker-machine-driver-hyperkit
I1205 11:49:40.354528    7922 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0 0x107bc16e0] Decompressors:map[bz2:0x1400081a7c8 gz:0x1400081a980 tar:0x1400081a890 tar.bz2:0x1400081a900 tar.gz:0x1400081a930 tar.xz:0x1400081a940 tar.zst:0x1400081a950 tbz2:0x1400081a900 tgz:0x1400081a930 txz:0x1400081a940 tzst:0x1400081a950 xz:0x1400081a988 zip:0x1400081a9e0 zst:0x1400081a9f0] Getters:map[file:0x140058b62c0 http:0x14000bae500 https:0x14000bae550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 11:49:40.354656    7922 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate3948961459/001/docker-machine-driver-hyperkit
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 : exit status 80 (5.286205084s)

                                                
                                                
-- stdout --
	* [NoKubernetes-344000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-344000
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-344000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-344000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-344000 -n NoKubernetes-344000: exit status 7 (71.670458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)
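
Analysis note: the docker-machine-driver-hyperkit download errors interleaved in this log (pid 7922) come from the concurrently running hyperkit install/update test, not from TestNoKubernetes itself: no arm64 asset exists for that release, so the checksum fetch 404s and the harness falls back to the common version. The 404 can be confirmed against the URL taken from the log above:

	curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 | head -n 1

The test's own failure is the same socket_vmnet connection refusal noted under TestPause/serial/Start.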

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.748204583s)

                                                
                                                
-- stdout --
	* [auto-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-907000" primary control-plane node in "auto-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:50:14.327485   10271 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:50:14.327639   10271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:14.327643   10271 out.go:358] Setting ErrFile to fd 2...
	I1205 11:50:14.327646   10271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:14.327781   10271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:50:14.328941   10271 out.go:352] Setting JSON to false
	I1205 11:50:14.346426   10271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6583,"bootTime":1733421631,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:50:14.346500   10271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:50:14.352520   10271 out.go:177] * [auto-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:50:14.359415   10271 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:50:14.359453   10271 notify.go:220] Checking for updates...
	I1205 11:50:14.366347   10271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:50:14.369376   10271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:50:14.372355   10271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:50:14.375397   10271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:50:14.378375   10271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:50:14.381708   10271 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:14.381785   10271 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:14.381848   10271 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:50:14.386333   10271 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:50:14.392331   10271 start.go:297] selected driver: qemu2
	I1205 11:50:14.392338   10271 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:50:14.392348   10271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:50:14.394827   10271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:50:14.398342   10271 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:50:14.401414   10271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:50:14.401430   10271 cni.go:84] Creating CNI manager for ""
	I1205 11:50:14.401451   10271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:50:14.401456   10271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:50:14.401489   10271 start.go:340] cluster config:
	{Name:auto-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:50:14.406047   10271 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:50:14.413360   10271 out.go:177] * Starting "auto-907000" primary control-plane node in "auto-907000" cluster
	I1205 11:50:14.417385   10271 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:50:14.417406   10271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:50:14.417417   10271 cache.go:56] Caching tarball of preloaded images
	I1205 11:50:14.417499   10271 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:50:14.417505   10271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:50:14.417567   10271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/auto-907000/config.json ...
	I1205 11:50:14.417578   10271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/auto-907000/config.json: {Name:mkca7ccff83bca9627d009ba839a6a7fbea16005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:50:14.417856   10271 start.go:360] acquireMachinesLock for auto-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:14.417912   10271 start.go:364] duration metric: took 50.75µs to acquireMachinesLock for "auto-907000"
	I1205 11:50:14.417924   10271 start.go:93] Provisioning new machine with config: &{Name:auto-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:14.417951   10271 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:14.426386   10271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:14.444718   10271 start.go:159] libmachine.API.Create for "auto-907000" (driver="qemu2")
	I1205 11:50:14.444744   10271 client.go:168] LocalClient.Create starting
	I1205 11:50:14.444818   10271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:14.444857   10271 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:14.444870   10271 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:14.444913   10271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:14.444944   10271 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:14.444958   10271 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:14.445392   10271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:14.603812   10271 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:14.643674   10271 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:14.643679   10271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:14.643867   10271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:14.653876   10271 main.go:141] libmachine: STDOUT: 
	I1205 11:50:14.653897   10271 main.go:141] libmachine: STDERR: 
	I1205 11:50:14.653949   10271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2 +20000M
	I1205 11:50:14.662437   10271 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:14.662452   10271 main.go:141] libmachine: STDERR: 
	I1205 11:50:14.662464   10271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:14.662469   10271 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:14.662480   10271 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:14.662527   10271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b9:c6:0f:3a:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:14.664281   10271 main.go:141] libmachine: STDOUT: 
	I1205 11:50:14.664294   10271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:14.664321   10271 client.go:171] duration metric: took 219.565709ms to LocalClient.Create
	I1205 11:50:16.666481   10271 start.go:128] duration metric: took 2.248528959s to createHost
	I1205 11:50:16.666545   10271 start.go:83] releasing machines lock for "auto-907000", held for 2.248643875s
	W1205 11:50:16.666590   10271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:16.682421   10271 out.go:177] * Deleting "auto-907000" in qemu2 ...
	W1205 11:50:16.707330   10271 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:16.707354   10271 start.go:729] Will try again in 5 seconds ...
	I1205 11:50:21.709512   10271 start.go:360] acquireMachinesLock for auto-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:21.710052   10271 start.go:364] duration metric: took 450.167µs to acquireMachinesLock for "auto-907000"
	I1205 11:50:21.710168   10271 start.go:93] Provisioning new machine with config: &{Name:auto-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:21.710450   10271 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:21.724088   10271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:21.775058   10271 start.go:159] libmachine.API.Create for "auto-907000" (driver="qemu2")
	I1205 11:50:21.775129   10271 client.go:168] LocalClient.Create starting
	I1205 11:50:21.775255   10271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:21.775337   10271 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:21.775354   10271 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:21.775425   10271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:21.775484   10271 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:21.775497   10271 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:21.776035   10271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:21.943772   10271 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:21.974711   10271 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:21.974716   10271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:21.974903   10271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:21.985034   10271 main.go:141] libmachine: STDOUT: 
	I1205 11:50:21.985052   10271 main.go:141] libmachine: STDERR: 
	I1205 11:50:21.985102   10271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2 +20000M
	I1205 11:50:21.993721   10271 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:21.993746   10271 main.go:141] libmachine: STDERR: 
	I1205 11:50:21.993761   10271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:21.993766   10271 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:21.993775   10271 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:21.993814   10271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:eb:f2:d3:c1:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/auto-907000/disk.qcow2
	I1205 11:50:21.995680   10271 main.go:141] libmachine: STDOUT: 
	I1205 11:50:21.995693   10271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:21.995706   10271 client.go:171] duration metric: took 220.57325ms to LocalClient.Create
	I1205 11:50:23.997870   10271 start.go:128] duration metric: took 2.2874095s to createHost
	I1205 11:50:23.997943   10271 start.go:83] releasing machines lock for "auto-907000", held for 2.287890083s
	W1205 11:50:23.998410   10271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:24.012033   10271 out.go:201] 
	W1205 11:50:24.016200   10271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:50:24.016284   10271 out.go:270] * 
	* 
	W1205 11:50:24.018880   10271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:50:24.028948   10271 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.75s)
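Every start in this group dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with status 80 (GUEST_PROVISION). A minimal way to confirm the daemon's state on the build host is sketched below; the Homebrew service name is an assumption based on the socket_vmnet install described in minikube's qemu2 driver docs, not something taken from this log.

    # Is anything listening behind the socket? "Connection refused" means no daemon.
    ls -l /var/run/socket_vmnet
    sudo nc -U /var/run/socket_vmnet </dev/null

    # Restart the daemon (service name assumed from the socket_vmnet Homebrew formula).
    sudo brew services restart socket_vmnet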

TestNetworkPlugins/group/kindnet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.931438917s)

-- stdout --
	* [kindnet-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-907000" primary control-plane node in "kindnet-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:50:26.428301   10380 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:50:26.428709   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:26.428715   10380 out.go:358] Setting ErrFile to fd 2...
	I1205 11:50:26.428718   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:26.428911   10380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:50:26.430522   10380 out.go:352] Setting JSON to false
	I1205 11:50:26.448565   10380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6595,"bootTime":1733421631,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:50:26.448655   10380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:50:26.454864   10380 out.go:177] * [kindnet-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:50:26.460875   10380 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:50:26.460887   10380 notify.go:220] Checking for updates...
	I1205 11:50:26.467789   10380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:50:26.470795   10380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:50:26.473851   10380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:50:26.476762   10380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:50:26.479804   10380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:50:26.483222   10380 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:26.483301   10380 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:26.483348   10380 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:50:26.487742   10380 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:50:26.494926   10380 start.go:297] selected driver: qemu2
	I1205 11:50:26.494934   10380 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:50:26.494943   10380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:50:26.497523   10380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:50:26.500750   10380 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:50:26.503871   10380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:50:26.503894   10380 cni.go:84] Creating CNI manager for "kindnet"
	I1205 11:50:26.503902   10380 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 11:50:26.503940   10380 start.go:340] cluster config:
	{Name:kindnet-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:50:26.508632   10380 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:50:26.515802   10380 out.go:177] * Starting "kindnet-907000" primary control-plane node in "kindnet-907000" cluster
	I1205 11:50:26.519829   10380 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:50:26.519846   10380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:50:26.519860   10380 cache.go:56] Caching tarball of preloaded images
	I1205 11:50:26.519954   10380 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:50:26.519960   10380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:50:26.520032   10380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kindnet-907000/config.json ...
	I1205 11:50:26.520043   10380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kindnet-907000/config.json: {Name:mk0b898ad0f0ca6b42d45ff5aad952567293ce18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:50:26.520412   10380 start.go:360] acquireMachinesLock for kindnet-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:26.520467   10380 start.go:364] duration metric: took 44.959µs to acquireMachinesLock for "kindnet-907000"
	I1205 11:50:26.520479   10380 start.go:93] Provisioning new machine with config: &{Name:kindnet-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:26.520513   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:26.527843   10380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:26.546131   10380 start.go:159] libmachine.API.Create for "kindnet-907000" (driver="qemu2")
	I1205 11:50:26.546155   10380 client.go:168] LocalClient.Create starting
	I1205 11:50:26.546225   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:26.546261   10380 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:26.546275   10380 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:26.546310   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:26.546341   10380 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:26.546350   10380 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:26.546784   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:26.703024   10380 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:26.901217   10380 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:26.901223   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:26.901457   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:26.912041   10380 main.go:141] libmachine: STDOUT: 
	I1205 11:50:26.912066   10380 main.go:141] libmachine: STDERR: 
	I1205 11:50:26.912129   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2 +20000M
	I1205 11:50:26.920819   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:26.920840   10380 main.go:141] libmachine: STDERR: 
	I1205 11:50:26.920857   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:26.920867   10380 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:26.920877   10380 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:26.920911   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:9a:84:01:3d:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:26.922714   10380 main.go:141] libmachine: STDOUT: 
	I1205 11:50:26.922728   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:26.922747   10380 client.go:171] duration metric: took 376.590959ms to LocalClient.Create
	I1205 11:50:28.924944   10380 start.go:128] duration metric: took 2.40441975s to createHost
	I1205 11:50:28.925044   10380 start.go:83] releasing machines lock for "kindnet-907000", held for 2.404588875s
	W1205 11:50:28.925104   10380 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:28.942443   10380 out.go:177] * Deleting "kindnet-907000" in qemu2 ...
	W1205 11:50:28.968099   10380 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:28.968141   10380 start.go:729] Will try again in 5 seconds ...
	I1205 11:50:33.970330   10380 start.go:360] acquireMachinesLock for kindnet-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:33.970869   10380 start.go:364] duration metric: took 426.833µs to acquireMachinesLock for "kindnet-907000"
	I1205 11:50:33.971012   10380 start.go:93] Provisioning new machine with config: &{Name:kindnet-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:33.971332   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:33.987224   10380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:34.040142   10380 start.go:159] libmachine.API.Create for "kindnet-907000" (driver="qemu2")
	I1205 11:50:34.040189   10380 client.go:168] LocalClient.Create starting
	I1205 11:50:34.040326   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:34.040407   10380 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:34.040427   10380 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:34.040490   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:34.040548   10380 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:34.040559   10380 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:34.041266   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:34.208731   10380 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:34.262939   10380 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:34.262945   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:34.263137   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:34.273508   10380 main.go:141] libmachine: STDOUT: 
	I1205 11:50:34.273527   10380 main.go:141] libmachine: STDERR: 
	I1205 11:50:34.273589   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2 +20000M
	I1205 11:50:34.282398   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:34.282415   10380 main.go:141] libmachine: STDERR: 
	I1205 11:50:34.282434   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:34.282440   10380 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:34.282448   10380 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:34.282481   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:0c:cc:44:58:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kindnet-907000/disk.qcow2
	I1205 11:50:34.284297   10380 main.go:141] libmachine: STDOUT: 
	I1205 11:50:34.284311   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:34.284324   10380 client.go:171] duration metric: took 244.130667ms to LocalClient.Create
	I1205 11:50:36.286475   10380 start.go:128] duration metric: took 2.315139083s to createHost
	I1205 11:50:36.286644   10380 start.go:83] releasing machines lock for "kindnet-907000", held for 2.315694833s
	W1205 11:50:36.287158   10380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:36.297766   10380 out.go:201] 
	W1205 11:50:36.302709   10380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:50:36.302738   10380 out.go:270] * 
	* 
	W1205 11:50:36.305571   10380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:50:36.313718   10380 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.93s)
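Note that everything up to the network attach succeeds: the boot ISO is copied, the SSH key is created, and the qcow2 disk is built by the two qemu-img steps shown above (a raw-to-qcow2 convert followed by a +20000M resize). Those steps can be replayed by hand when triaging, to confirm the disk path is unrelated to the failure; the file names below are placeholders, not the test's actual paths.

    # Replay the two-step disk provisioning from the log (placeholder paths).
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # confirm "file format: qcow2" and the grown virtual size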

TestNetworkPlugins/group/calico/Start (9.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.804741916s)

-- stdout --
	* [calico-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-907000" primary control-plane node in "calico-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:50:38.793798   10497 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:50:38.793955   10497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:38.793959   10497 out.go:358] Setting ErrFile to fd 2...
	I1205 11:50:38.793961   10497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:38.794091   10497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:50:38.795223   10497 out.go:352] Setting JSON to false
	I1205 11:50:38.812883   10497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6607,"bootTime":1733421631,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:50:38.812973   10497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:50:38.818352   10497 out.go:177] * [calico-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:50:38.826224   10497 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:50:38.826270   10497 notify.go:220] Checking for updates...
	I1205 11:50:38.833165   10497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:50:38.836165   10497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:50:38.839158   10497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:50:38.842142   10497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:50:38.845187   10497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:50:38.848562   10497 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:38.848641   10497 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:38.848687   10497 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:50:38.853165   10497 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:50:38.859156   10497 start.go:297] selected driver: qemu2
	I1205 11:50:38.859164   10497 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:50:38.859171   10497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:50:38.861749   10497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:50:38.865153   10497 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:50:38.868288   10497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:50:38.868308   10497 cni.go:84] Creating CNI manager for "calico"
	I1205 11:50:38.868323   10497 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 11:50:38.868359   10497 start.go:340] cluster config:
	{Name:calico-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:50:38.873023   10497 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:50:38.880105   10497 out.go:177] * Starting "calico-907000" primary control-plane node in "calico-907000" cluster
	I1205 11:50:38.884172   10497 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:50:38.884186   10497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:50:38.884197   10497 cache.go:56] Caching tarball of preloaded images
	I1205 11:50:38.884264   10497 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:50:38.884270   10497 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:50:38.884322   10497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/calico-907000/config.json ...
	I1205 11:50:38.884332   10497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/calico-907000/config.json: {Name:mk19168ee8cf0bf8d6ecaee3b3fa9cec16f26c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:50:38.884663   10497 start.go:360] acquireMachinesLock for calico-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:38.884711   10497 start.go:364] duration metric: took 41.542µs to acquireMachinesLock for "calico-907000"
	I1205 11:50:38.884722   10497 start.go:93] Provisioning new machine with config: &{Name:calico-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:38.884750   10497 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:38.892222   10497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:38.909912   10497 start.go:159] libmachine.API.Create for "calico-907000" (driver="qemu2")
	I1205 11:50:38.909941   10497 client.go:168] LocalClient.Create starting
	I1205 11:50:38.910008   10497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:38.910042   10497 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:38.910057   10497 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:38.910096   10497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:38.910124   10497 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:38.910136   10497 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:38.910548   10497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:39.066270   10497 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:39.135603   10497 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:39.135609   10497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:39.135805   10497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:39.145961   10497 main.go:141] libmachine: STDOUT: 
	I1205 11:50:39.145977   10497 main.go:141] libmachine: STDERR: 
	I1205 11:50:39.146033   10497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2 +20000M
	I1205 11:50:39.154643   10497 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:39.154656   10497 main.go:141] libmachine: STDERR: 
	I1205 11:50:39.154675   10497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:39.154680   10497 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:39.154691   10497 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:39.154728   10497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:72:fa:c5:4c:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:39.156575   10497 main.go:141] libmachine: STDOUT: 
	I1205 11:50:39.156590   10497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:39.156609   10497 client.go:171] duration metric: took 246.6645ms to LocalClient.Create
	I1205 11:50:41.158843   10497 start.go:128] duration metric: took 2.274093292s to createHost
	I1205 11:50:41.158905   10497 start.go:83] releasing machines lock for "calico-907000", held for 2.274203167s
	W1205 11:50:41.158952   10497 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:41.173093   10497 out.go:177] * Deleting "calico-907000" in qemu2 ...
	W1205 11:50:41.200375   10497 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:41.200406   10497 start.go:729] Will try again in 5 seconds ...
	I1205 11:50:46.202597   10497 start.go:360] acquireMachinesLock for calico-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:46.203171   10497 start.go:364] duration metric: took 477.541µs to acquireMachinesLock for "calico-907000"
	I1205 11:50:46.203291   10497 start.go:93] Provisioning new machine with config: &{Name:calico-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:46.203520   10497 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:46.217254   10497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:46.267259   10497 start.go:159] libmachine.API.Create for "calico-907000" (driver="qemu2")
	I1205 11:50:46.267318   10497 client.go:168] LocalClient.Create starting
	I1205 11:50:46.267455   10497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:46.267543   10497 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:46.267561   10497 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:46.267645   10497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:46.267704   10497 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:46.267722   10497 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:46.268318   10497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:46.437808   10497 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:46.497545   10497 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:46.497553   10497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:46.497752   10497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:46.507971   10497 main.go:141] libmachine: STDOUT: 
	I1205 11:50:46.507991   10497 main.go:141] libmachine: STDERR: 
	I1205 11:50:46.508054   10497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2 +20000M
	I1205 11:50:46.516899   10497 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:46.516921   10497 main.go:141] libmachine: STDERR: 
	I1205 11:50:46.516934   10497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:46.516939   10497 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:46.516947   10497 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:46.516983   10497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:61:9c:4b:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/calico-907000/disk.qcow2
	I1205 11:50:46.518796   10497 main.go:141] libmachine: STDOUT: 
	I1205 11:50:46.518815   10497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:46.518835   10497 client.go:171] duration metric: took 251.51425ms to LocalClient.Create
	I1205 11:50:48.521026   10497 start.go:128] duration metric: took 2.317488875s to createHost
	I1205 11:50:48.521093   10497 start.go:83] releasing machines lock for "calico-907000", held for 2.317918959s
	W1205 11:50:48.521426   10497 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:48.535367   10497 out.go:201] 
	W1205 11:50:48.538576   10497 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:50:48.538604   10497 out.go:270] * 
	* 
	W1205 11:50:48.541421   10497 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:50:48.552358   10497 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
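
All of the failures in this group share the root cause visible in the stderr above: minikube launches QEMU through socket_vmnet_client, which must reach the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet, and every attempt gets "Connection refused", meaning nothing is listening on that socket on the CI host. A minimal spot-check sketch follows; the paths are taken verbatim from the log, while the launchctl grep and the use of `true` as a probe command are illustrative assumptions, not part of the test suite.

	# Does the Unix socket exist where minikube expects it?
	ls -l /var/run/socket_vmnet
	# Is any socket_vmnet daemon loaded under launchd? (label varies by install method)
	sudo launchctl list | grep -i socket_vmnet
	# Probe the client the same way minikube does: socket path first, then a command
	# to exec with the connection on fd 3 (matching the -netdev socket,id=net0,fd=3
	# flag in the QEMU invocation above).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket reachable"

If the probe also reports "Connection refused", the daemon is down, and the per-profile "minikube delete" suggested by the error message cannot fix it.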

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.881015625s)

-- stdout --
	* [custom-flannel-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-907000" primary control-plane node in "custom-flannel-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:50:51.152109   10617 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:50:51.152266   10617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:51.152269   10617 out.go:358] Setting ErrFile to fd 2...
	I1205 11:50:51.152272   10617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:50:51.152392   10617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:50:51.153523   10617 out.go:352] Setting JSON to false
	I1205 11:50:51.171526   10617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6620,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:50:51.171604   10617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:50:51.178140   10617 out.go:177] * [custom-flannel-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:50:51.184964   10617 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:50:51.185019   10617 notify.go:220] Checking for updates...
	I1205 11:50:51.192079   10617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:50:51.193518   10617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:50:51.197063   10617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:50:51.200085   10617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:50:51.203194   10617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:50:51.206401   10617 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:51.206481   10617 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:50:51.206524   10617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:50:51.211048   10617 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:50:51.218097   10617 start.go:297] selected driver: qemu2
	I1205 11:50:51.218103   10617 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:50:51.218111   10617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:50:51.220634   10617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:50:51.223009   10617 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:50:51.226198   10617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:50:51.226228   10617 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1205 11:50:51.226240   10617 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1205 11:50:51.226277   10617 start.go:340] cluster config:
	{Name:custom-flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:50:51.230906   10617 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:50:51.238076   10617 out.go:177] * Starting "custom-flannel-907000" primary control-plane node in "custom-flannel-907000" cluster
	I1205 11:50:51.241041   10617 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:50:51.241055   10617 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:50:51.241063   10617 cache.go:56] Caching tarball of preloaded images
	I1205 11:50:51.241132   10617 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:50:51.241137   10617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:50:51.241187   10617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/custom-flannel-907000/config.json ...
	I1205 11:50:51.241199   10617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/custom-flannel-907000/config.json: {Name:mk4931caac960d392ead7696493c367e02ef9221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:50:51.241524   10617 start.go:360] acquireMachinesLock for custom-flannel-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:51.241574   10617 start.go:364] duration metric: took 42.292µs to acquireMachinesLock for "custom-flannel-907000"
	I1205 11:50:51.241585   10617 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:51.241614   10617 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:51.249949   10617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:51.267174   10617 start.go:159] libmachine.API.Create for "custom-flannel-907000" (driver="qemu2")
	I1205 11:50:51.267201   10617 client.go:168] LocalClient.Create starting
	I1205 11:50:51.267277   10617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:51.267313   10617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:51.267328   10617 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:51.267364   10617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:51.267392   10617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:51.267398   10617 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:51.267806   10617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:51.423796   10617 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:51.586230   10617 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:51.586238   10617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:51.586461   10617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:51.596818   10617 main.go:141] libmachine: STDOUT: 
	I1205 11:50:51.596839   10617 main.go:141] libmachine: STDERR: 
	I1205 11:50:51.596894   10617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2 +20000M
	I1205 11:50:51.605445   10617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:51.605460   10617 main.go:141] libmachine: STDERR: 
	I1205 11:50:51.605483   10617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:51.605489   10617 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:51.605500   10617 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:51.605529   10617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:58:b1:ce:19:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:51.607321   10617 main.go:141] libmachine: STDOUT: 
	I1205 11:50:51.607338   10617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:51.607360   10617 client.go:171] duration metric: took 340.155791ms to LocalClient.Create
	I1205 11:50:53.609536   10617 start.go:128] duration metric: took 2.367921584s to createHost
	I1205 11:50:53.609605   10617 start.go:83] releasing machines lock for "custom-flannel-907000", held for 2.368043584s
	W1205 11:50:53.609651   10617 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:53.620826   10617 out.go:177] * Deleting "custom-flannel-907000" in qemu2 ...
	W1205 11:50:53.648044   10617 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:50:53.648076   10617 start.go:729] Will try again in 5 seconds ...
	I1205 11:50:58.648586   10617 start.go:360] acquireMachinesLock for custom-flannel-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:50:58.649140   10617 start.go:364] duration metric: took 423.708µs to acquireMachinesLock for "custom-flannel-907000"
	I1205 11:50:58.649258   10617 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:50:58.649527   10617 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:50:58.663325   10617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:50:58.713703   10617 start.go:159] libmachine.API.Create for "custom-flannel-907000" (driver="qemu2")
	I1205 11:50:58.713755   10617 client.go:168] LocalClient.Create starting
	I1205 11:50:58.713915   10617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:50:58.713995   10617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:58.714012   10617 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:58.714091   10617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:50:58.714149   10617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:50:58.714161   10617 main.go:141] libmachine: Parsing certificate...
	I1205 11:50:58.714703   10617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:50:58.881911   10617 main.go:141] libmachine: Creating SSH key...
	I1205 11:50:58.941711   10617 main.go:141] libmachine: Creating Disk image...
	I1205 11:50:58.941719   10617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:50:58.941912   10617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:58.952171   10617 main.go:141] libmachine: STDOUT: 
	I1205 11:50:58.952189   10617 main.go:141] libmachine: STDERR: 
	I1205 11:50:58.952243   10617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2 +20000M
	I1205 11:50:58.960984   10617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:50:58.960998   10617 main.go:141] libmachine: STDERR: 
	I1205 11:50:58.961010   10617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:58.961015   10617 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:50:58.961024   10617 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:50:58.961057   10617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c8:d6:9e:7f:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/custom-flannel-907000/disk.qcow2
	I1205 11:50:58.962831   10617 main.go:141] libmachine: STDOUT: 
	I1205 11:50:58.962846   10617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:50:58.962861   10617 client.go:171] duration metric: took 249.102083ms to LocalClient.Create
	I1205 11:51:00.963559   10617 start.go:128] duration metric: took 2.314015083s to createHost
	I1205 11:51:00.963636   10617 start.go:83] releasing machines lock for "custom-flannel-907000", held for 2.314492834s
	W1205 11:51:00.963911   10617 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:00.974395   10617 out.go:201] 
	W1205 11:51:00.977453   10617 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:51:00.977492   10617 out.go:270] * 
	* 
	W1205 11:51:00.978838   10617 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:51:00.986464   10617 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
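
The retry flow in the log (StartHost fails, the profile is deleted, minikube waits 5 seconds, the second create attempt fails identically, exit status 80 / GUEST_PROVISION) cannot succeed while the daemon is absent, so the remediation belongs on the host rather than in the test. A hedged sketch, assuming socket_vmnet was installed either via Homebrew or from source with the upstream launchd plist; the service name and plist label below are the upstream defaults and should be verified on the agent.

	# Homebrew install: socket_vmnet runs as a brew service
	sudo brew services start socket_vmnet
	# Source install: load the launchd daemon shipped by the project
	sudo launchctl bootstrap system /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist
	# Then clear the half-created profile, as the error output suggests:
	out/minikube-darwin-arm64 delete -p custom-flannel-907000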

TestNetworkPlugins/group/false/Start (10.02s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.017355542s)

-- stdout --
	* [false-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-907000" primary control-plane node in "false-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:51:03.565390   10734 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:51:03.565543   10734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:03.565546   10734 out.go:358] Setting ErrFile to fd 2...
	I1205 11:51:03.565549   10734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:03.565676   10734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:51:03.566880   10734 out.go:352] Setting JSON to false
	I1205 11:51:03.584591   10734 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6632,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:51:03.584658   10734 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:51:03.590401   10734 out.go:177] * [false-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:51:03.597270   10734 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:51:03.597321   10734 notify.go:220] Checking for updates...
	I1205 11:51:03.604371   10734 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:51:03.605816   10734 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:51:03.609340   10734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:51:03.612399   10734 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:51:03.615407   10734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:51:03.618704   10734 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:03.618787   10734 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:03.618853   10734 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:51:03.623353   10734 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:51:03.630364   10734 start.go:297] selected driver: qemu2
	I1205 11:51:03.630372   10734 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:51:03.630382   10734 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:51:03.633016   10734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:51:03.636382   10734 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:51:03.639525   10734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:51:03.639548   10734 cni.go:84] Creating CNI manager for "false"
	I1205 11:51:03.639578   10734 start.go:340] cluster config:
	{Name:false-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:51:03.644259   10734 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:51:03.651382   10734 out.go:177] * Starting "false-907000" primary control-plane node in "false-907000" cluster
	I1205 11:51:03.654311   10734 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:51:03.654325   10734 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:51:03.654333   10734 cache.go:56] Caching tarball of preloaded images
	I1205 11:51:03.654406   10734 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:51:03.654412   10734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:51:03.654479   10734 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/false-907000/config.json ...
	I1205 11:51:03.654491   10734 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/false-907000/config.json: {Name:mkefe103b6953c6be3f0cc0bfc18cf0fbe52c1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:51:03.654851   10734 start.go:360] acquireMachinesLock for false-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:03.654902   10734 start.go:364] duration metric: took 44.667µs to acquireMachinesLock for "false-907000"
	I1205 11:51:03.654915   10734 start.go:93] Provisioning new machine with config: &{Name:false-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:03.654946   10734 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:03.662281   10734 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:03.680776   10734 start.go:159] libmachine.API.Create for "false-907000" (driver="qemu2")
	I1205 11:51:03.680815   10734 client.go:168] LocalClient.Create starting
	I1205 11:51:03.680893   10734 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:03.680930   10734 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:03.680942   10734 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:03.680980   10734 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:03.681010   10734 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:03.681018   10734 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:03.681459   10734 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:03.840566   10734 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:03.879321   10734 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:03.879326   10734 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:03.879513   10734 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:03.889492   10734 main.go:141] libmachine: STDOUT: 
	I1205 11:51:03.889508   10734 main.go:141] libmachine: STDERR: 
	I1205 11:51:03.889566   10734 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2 +20000M
	I1205 11:51:03.898096   10734 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:03.898110   10734 main.go:141] libmachine: STDERR: 
	I1205 11:51:03.898130   10734 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:03.898136   10734 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:03.898152   10734 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:03.898212   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4c:6c:35:83:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:03.900032   10734 main.go:141] libmachine: STDOUT: 
	I1205 11:51:03.900050   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:03.900068   10734 client.go:171] duration metric: took 219.249166ms to LocalClient.Create
	I1205 11:51:05.902299   10734 start.go:128] duration metric: took 2.247354083s to createHost
	I1205 11:51:05.902348   10734 start.go:83] releasing machines lock for "false-907000", held for 2.24745575s
	W1205 11:51:05.902401   10734 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:05.914558   10734 out.go:177] * Deleting "false-907000" in qemu2 ...
	W1205 11:51:05.941948   10734 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:05.941979   10734 start.go:729] Will try again in 5 seconds ...
	I1205 11:51:10.944183   10734 start.go:360] acquireMachinesLock for false-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:10.944684   10734 start.go:364] duration metric: took 418.167µs to acquireMachinesLock for "false-907000"
	I1205 11:51:10.944801   10734 start.go:93] Provisioning new machine with config: &{Name:false-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:10.945158   10734 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:10.959981   10734 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:11.011238   10734 start.go:159] libmachine.API.Create for "false-907000" (driver="qemu2")
	I1205 11:51:11.011289   10734 client.go:168] LocalClient.Create starting
	I1205 11:51:11.011432   10734 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:11.011517   10734 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:11.011536   10734 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:11.011595   10734 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:11.011653   10734 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:11.011665   10734 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:11.012209   10734 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:11.181992   10734 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:11.477023   10734 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:11.477036   10734 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:11.477319   10734 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:11.488144   10734 main.go:141] libmachine: STDOUT: 
	I1205 11:51:11.488165   10734 main.go:141] libmachine: STDERR: 
	I1205 11:51:11.488225   10734 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2 +20000M
	I1205 11:51:11.496858   10734 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:11.496874   10734 main.go:141] libmachine: STDERR: 
	I1205 11:51:11.496885   10734 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:11.496891   10734 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:11.496902   10734 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:11.496946   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a4:0d:07:b9:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/false-907000/disk.qcow2
	I1205 11:51:11.498767   10734 main.go:141] libmachine: STDOUT: 
	I1205 11:51:11.498781   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:11.498800   10734 client.go:171] duration metric: took 487.505875ms to LocalClient.Create
	I1205 11:51:13.500963   10734 start.go:128] duration metric: took 2.55577375s to createHost
	I1205 11:51:13.501033   10734 start.go:83] releasing machines lock for "false-907000", held for 2.556348667s
	W1205 11:51:13.501389   10734 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:13.517112   10734 out.go:201] 
	W1205 11:51:13.521152   10734 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:51:13.521181   10734 out.go:270] * 
	* 
	W1205 11:51:13.532779   10734 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:51:13.537152   10734 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.02s)
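
Note that the qemu-img convert and resize steps succeed on every attempt, so QEMU and the disk tooling are healthy; only the vmnet socket step fails. One way to isolate socket_vmnet itself is to run it in the foreground and watch for the client connection while repeating a single failed start. A sketch under stated assumptions: the --vmnet-gateway address is the upstream example value, not something mandated by this job, and the start flags are a subset of those in the log above.

	# Run the daemon in the foreground (vmnet.framework requires root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# In a second terminal, repeat one failing start from this report:
	out/minikube-darwin-arm64 start -p false-907000 --memory=3072 --cni=false --driver=qemu2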

TestNetworkPlugins/group/enable-default-cni/Start (9.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.833349875s)

-- stdout --
	* [enable-default-cni-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-907000" primary control-plane node in "enable-default-cni-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:51:15.825742   10843 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:51:15.825920   10843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:15.825923   10843 out.go:358] Setting ErrFile to fd 2...
	I1205 11:51:15.825926   10843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:15.826056   10843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:51:15.827143   10843 out.go:352] Setting JSON to false
	I1205 11:51:15.844698   10843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6644,"bootTime":1733421631,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:51:15.844768   10843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:51:15.851054   10843 out.go:177] * [enable-default-cni-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:51:15.857051   10843 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:51:15.857074   10843 notify.go:220] Checking for updates...
	I1205 11:51:15.863922   10843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:51:15.866949   10843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:51:15.870030   10843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:51:15.875166   10843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:51:15.877993   10843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:51:15.881378   10843 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:15.881457   10843 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:15.881508   10843 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:51:15.885986   10843 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:51:15.893023   10843 start.go:297] selected driver: qemu2
	I1205 11:51:15.893030   10843 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:51:15.893048   10843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:51:15.895687   10843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:51:15.899018   10843 out.go:177] * Automatically selected the socket_vmnet network
	E1205 11:51:15.902028   10843 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1205 11:51:15.902039   10843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:51:15.902053   10843 cni.go:84] Creating CNI manager for "bridge"
	I1205 11:51:15.902060   10843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:51:15.902098   10843 start.go:340] cluster config:
	{Name:enable-default-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:51:15.906788   10843 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:51:15.913925   10843 out.go:177] * Starting "enable-default-cni-907000" primary control-plane node in "enable-default-cni-907000" cluster
	I1205 11:51:15.917963   10843 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:51:15.917977   10843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:51:15.917989   10843 cache.go:56] Caching tarball of preloaded images
	I1205 11:51:15.918064   10843 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:51:15.918070   10843 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:51:15.918127   10843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/enable-default-cni-907000/config.json ...
	I1205 11:51:15.918139   10843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/enable-default-cni-907000/config.json: {Name:mk6d9d148390433ce6d40182824ad4c57eb17d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:51:15.918504   10843 start.go:360] acquireMachinesLock for enable-default-cni-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:15.918560   10843 start.go:364] duration metric: took 45.708µs to acquireMachinesLock for "enable-default-cni-907000"
	I1205 11:51:15.918573   10843 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:15.918599   10843 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:15.922995   10843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:15.941409   10843 start.go:159] libmachine.API.Create for "enable-default-cni-907000" (driver="qemu2")
	I1205 11:51:15.941439   10843 client.go:168] LocalClient.Create starting
	I1205 11:51:15.941519   10843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:15.941558   10843 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:15.941567   10843 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:15.941611   10843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:15.941642   10843 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:15.941650   10843 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:15.942059   10843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:16.098254   10843 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:16.192485   10843 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:16.192491   10843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:16.192699   10843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:16.203038   10843 main.go:141] libmachine: STDOUT: 
	I1205 11:51:16.203056   10843 main.go:141] libmachine: STDERR: 
	I1205 11:51:16.203113   10843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2 +20000M
	I1205 11:51:16.211694   10843 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:16.211709   10843 main.go:141] libmachine: STDERR: 
	I1205 11:51:16.211729   10843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:16.211734   10843 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:16.211745   10843 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:16.211775   10843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d8:7b:8f:53:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:16.213563   10843 main.go:141] libmachine: STDOUT: 
	I1205 11:51:16.213574   10843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:16.213591   10843 client.go:171] duration metric: took 272.15ms to LocalClient.Create
	I1205 11:51:18.215790   10843 start.go:128] duration metric: took 2.297177416s to createHost
	I1205 11:51:18.215889   10843 start.go:83] releasing machines lock for "enable-default-cni-907000", held for 2.29733975s
	W1205 11:51:18.215943   10843 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:18.232384   10843 out.go:177] * Deleting "enable-default-cni-907000" in qemu2 ...
	W1205 11:51:18.258509   10843 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:18.258549   10843 start.go:729] Will try again in 5 seconds ...
	I1205 11:51:23.260745   10843 start.go:360] acquireMachinesLock for enable-default-cni-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:23.261414   10843 start.go:364] duration metric: took 552.25µs to acquireMachinesLock for "enable-default-cni-907000"
	I1205 11:51:23.261543   10843 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:23.261829   10843 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:23.276518   10843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:23.329586   10843 start.go:159] libmachine.API.Create for "enable-default-cni-907000" (driver="qemu2")
	I1205 11:51:23.329638   10843 client.go:168] LocalClient.Create starting
	I1205 11:51:23.329800   10843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:23.329900   10843 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:23.329917   10843 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:23.329977   10843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:23.330036   10843 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:23.330052   10843 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:23.330743   10843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:23.499646   10843 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:23.557310   10843 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:23.557315   10843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:23.557516   10843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:23.567681   10843 main.go:141] libmachine: STDOUT: 
	I1205 11:51:23.567707   10843 main.go:141] libmachine: STDERR: 
	I1205 11:51:23.567768   10843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2 +20000M
	I1205 11:51:23.576356   10843 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:23.576372   10843 main.go:141] libmachine: STDERR: 
	I1205 11:51:23.576382   10843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:23.576398   10843 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:23.576412   10843 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:23.576449   10843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:4f:fc:f6:32:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/enable-default-cni-907000/disk.qcow2
	I1205 11:51:23.578255   10843 main.go:141] libmachine: STDOUT: 
	I1205 11:51:23.578268   10843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:23.578280   10843 client.go:171] duration metric: took 248.637458ms to LocalClient.Create
	I1205 11:51:25.580440   10843 start.go:128] duration metric: took 2.318605667s to createHost
	I1205 11:51:25.580514   10843 start.go:83] releasing machines lock for "enable-default-cni-907000", held for 2.319094333s
	W1205 11:51:25.580967   10843 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:25.594570   10843 out.go:201] 
	W1205 11:51:25.598828   10843 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:51:25.598853   10843 out.go:270] * 
	* 
	W1205 11:51:25.601461   10843 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:51:25.611550   10843 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
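
This failure, like the others in the TestNetworkPlugins group below, reduces to a single host-side symptom: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the QEMU VM never gets a network device and the start aborts with GUEST_PROVISION. As a minimal diagnostic sketch (hypothetical, not part of the test suite; the socket path is taken from the log above, and opening it may require root), the following Go program repeats the failing connect outside of minikube, separating a down daemon from a test-side problem:

	// probe_socket_vmnet.go - hypothetical helper, not a minikube tool.
	// Dials the same Unix socket that socket_vmnet_client uses; an error
	// here reproduces the "Connection refused" seen throughout this log.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err) // daemon down or socket stale
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the way the tests do, every qemu2 start on this agent will keep failing until the daemon is restarted; the repeated identical failures below are consistent with that.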

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.791743208s)

-- stdout --
	* [flannel-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-907000" primary control-plane node in "flannel-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:51:27.902694   10954 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:51:27.902853   10954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:27.902856   10954 out.go:358] Setting ErrFile to fd 2...
	I1205 11:51:27.902859   10954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:27.902971   10954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:51:27.904089   10954 out.go:352] Setting JSON to false
	I1205 11:51:27.921776   10954 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6656,"bootTime":1733421631,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:51:27.921864   10954 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:51:27.928050   10954 out.go:177] * [flannel-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:51:27.934991   10954 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:51:27.935044   10954 notify.go:220] Checking for updates...
	I1205 11:51:27.940382   10954 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:51:27.943018   10954 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:51:27.945981   10954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:51:27.949032   10954 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:51:27.951997   10954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:51:27.955366   10954 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:27.955440   10954 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:27.955491   10954 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:51:27.960014   10954 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:51:27.966966   10954 start.go:297] selected driver: qemu2
	I1205 11:51:27.966973   10954 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:51:27.966979   10954 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:51:27.969459   10954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:51:27.973025   10954 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:51:27.976145   10954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:51:27.976166   10954 cni.go:84] Creating CNI manager for "flannel"
	I1205 11:51:27.976169   10954 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1205 11:51:27.976204   10954 start.go:340] cluster config:
	{Name:flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:51:27.980713   10954 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:51:27.987827   10954 out.go:177] * Starting "flannel-907000" primary control-plane node in "flannel-907000" cluster
	I1205 11:51:27.991990   10954 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:51:27.992014   10954 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:51:27.992025   10954 cache.go:56] Caching tarball of preloaded images
	I1205 11:51:27.992110   10954 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:51:27.992116   10954 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:51:27.992175   10954 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/flannel-907000/config.json ...
	I1205 11:51:27.992186   10954 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/flannel-907000/config.json: {Name:mkb4ad2d157d99893dbe4abc3251152528677987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:51:27.992525   10954 start.go:360] acquireMachinesLock for flannel-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:27.992572   10954 start.go:364] duration metric: took 41.375µs to acquireMachinesLock for "flannel-907000"
	I1205 11:51:27.992584   10954 start.go:93] Provisioning new machine with config: &{Name:flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:27.992615   10954 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:27.996830   10954 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:28.014844   10954 start.go:159] libmachine.API.Create for "flannel-907000" (driver="qemu2")
	I1205 11:51:28.014872   10954 client.go:168] LocalClient.Create starting
	I1205 11:51:28.014956   10954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:28.014991   10954 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:28.015000   10954 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:28.015030   10954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:28.015058   10954 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:28.015067   10954 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:28.015444   10954 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:28.172722   10954 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:28.228884   10954 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:28.228889   10954 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:28.229071   10954 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:28.239044   10954 main.go:141] libmachine: STDOUT: 
	I1205 11:51:28.239066   10954 main.go:141] libmachine: STDERR: 
	I1205 11:51:28.239119   10954 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2 +20000M
	I1205 11:51:28.247730   10954 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:28.247749   10954 main.go:141] libmachine: STDERR: 
	I1205 11:51:28.247764   10954 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:28.247768   10954 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:28.247781   10954 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:28.247819   10954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c3:ed:b5:11:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:28.249611   10954 main.go:141] libmachine: STDOUT: 
	I1205 11:51:28.249624   10954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:28.249643   10954 client.go:171] duration metric: took 234.768292ms to LocalClient.Create
	I1205 11:51:30.251825   10954 start.go:128] duration metric: took 2.259205666s to createHost
	I1205 11:51:30.251897   10954 start.go:83] releasing machines lock for "flannel-907000", held for 2.259335584s
	W1205 11:51:30.251948   10954 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:30.267286   10954 out.go:177] * Deleting "flannel-907000" in qemu2 ...
	W1205 11:51:30.293595   10954 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:30.293621   10954 start.go:729] Will try again in 5 seconds ...
	I1205 11:51:35.294524   10954 start.go:360] acquireMachinesLock for flannel-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:35.295182   10954 start.go:364] duration metric: took 563.416µs to acquireMachinesLock for "flannel-907000"
	I1205 11:51:35.295315   10954 start.go:93] Provisioning new machine with config: &{Name:flannel-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:35.295548   10954 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:35.304441   10954 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:35.359287   10954 start.go:159] libmachine.API.Create for "flannel-907000" (driver="qemu2")
	I1205 11:51:35.359344   10954 client.go:168] LocalClient.Create starting
	I1205 11:51:35.359487   10954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:35.359568   10954 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:35.359584   10954 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:35.359643   10954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:35.359703   10954 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:35.359715   10954 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:35.360404   10954 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:35.530453   10954 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:35.589021   10954 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:35.589026   10954 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:35.589206   10954 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:35.599231   10954 main.go:141] libmachine: STDOUT: 
	I1205 11:51:35.599251   10954 main.go:141] libmachine: STDERR: 
	I1205 11:51:35.599306   10954 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2 +20000M
	I1205 11:51:35.607818   10954 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:35.607832   10954 main.go:141] libmachine: STDERR: 
	I1205 11:51:35.607843   10954 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:35.607848   10954 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:35.607859   10954 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:35.607904   10954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b9:09:68:89:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/flannel-907000/disk.qcow2
	I1205 11:51:35.609751   10954 main.go:141] libmachine: STDOUT: 
	I1205 11:51:35.609770   10954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:35.609783   10954 client.go:171] duration metric: took 250.43675ms to LocalClient.Create
	I1205 11:51:37.612038   10954 start.go:128] duration metric: took 2.316471333s to createHost
	I1205 11:51:37.612108   10954 start.go:83] releasing machines lock for "flannel-907000", held for 2.316922792s
	W1205 11:51:37.612439   10954 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:37.626062   10954 out.go:201] 
	W1205 11:51:37.630268   10954 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:51:37.630291   10954 out.go:270] * 
	* 
	W1205 11:51:37.632950   10954 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:51:37.647137   10954 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
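
The flannel run fails identically to enable-default-cni above, which implicates the shared socket_vmnet daemon on the build agent rather than the CNI under test. On a Unix socket, "Connection refused" specifically means the socket file exists but no process is accepting on it; a follow-up sketch (again hypothetical, assuming only the path from the log) distinguishes that stale-socket case from a socket file that was never created:

	// stale_socket_check.go - hypothetical helper, not a minikube tool.
	// Separates "socket file was never created" from "socket file exists
	// but the daemon behind it is gone", the case matching this log.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"os"
		"syscall"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		if _, err := os.Stat(sock); errors.Is(err, os.ErrNotExist) {
			fmt.Println("socket file missing: daemon never created it")
			return
		}
		conn, err := net.Dial("unix", sock)
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("socket file present but no listener: daemon not running")
			return
		}
		if err != nil {
			fmt.Printf("dial failed: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("daemon is listening")
	}

Either outcome points at host setup, not at the flannel configuration this test was meant to exercise.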

TestNetworkPlugins/group/bridge/Start (10s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.995772084s)

-- stdout --
	* [bridge-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-907000" primary control-plane node in "bridge-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:51:40.147354   11072 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:51:40.147511   11072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:40.147514   11072 out.go:358] Setting ErrFile to fd 2...
	I1205 11:51:40.147516   11072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:40.147646   11072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:51:40.148769   11072 out.go:352] Setting JSON to false
	I1205 11:51:40.166360   11072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6669,"bootTime":1733421631,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:51:40.166441   11072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:51:40.171559   11072 out.go:177] * [bridge-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:51:40.178574   11072 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:51:40.178665   11072 notify.go:220] Checking for updates...
	I1205 11:51:40.185528   11072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:51:40.188524   11072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:51:40.191523   11072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:51:40.194557   11072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:51:40.197572   11072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:51:40.199472   11072 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:40.199549   11072 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:40.199609   11072 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:51:40.203496   11072 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:51:40.210378   11072 start.go:297] selected driver: qemu2
	I1205 11:51:40.210386   11072 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:51:40.210398   11072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:51:40.213003   11072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:51:40.216503   11072 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:51:40.219666   11072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:51:40.219687   11072 cni.go:84] Creating CNI manager for "bridge"
	I1205 11:51:40.219690   11072 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:51:40.219727   11072 start.go:340] cluster config:
	{Name:bridge-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:51:40.224320   11072 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:51:40.231581   11072 out.go:177] * Starting "bridge-907000" primary control-plane node in "bridge-907000" cluster
	I1205 11:51:40.235577   11072 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:51:40.235594   11072 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:51:40.235606   11072 cache.go:56] Caching tarball of preloaded images
	I1205 11:51:40.235695   11072 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:51:40.235701   11072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:51:40.235757   11072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/bridge-907000/config.json ...
	I1205 11:51:40.235771   11072 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/bridge-907000/config.json: {Name:mk1c2b461c084f1ceb1627aee30301377a9741e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:51:40.236120   11072 start.go:360] acquireMachinesLock for bridge-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:40.236167   11072 start.go:364] duration metric: took 42µs to acquireMachinesLock for "bridge-907000"
	I1205 11:51:40.236178   11072 start.go:93] Provisioning new machine with config: &{Name:bridge-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:40.236209   11072 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:40.244546   11072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:40.262351   11072 start.go:159] libmachine.API.Create for "bridge-907000" (driver="qemu2")
	I1205 11:51:40.262384   11072 client.go:168] LocalClient.Create starting
	I1205 11:51:40.262456   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:40.262493   11072 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:40.262507   11072 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:40.262544   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:40.262573   11072 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:40.262582   11072 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:40.262998   11072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:40.420622   11072 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:40.529455   11072 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:40.529461   11072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:40.529660   11072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:40.539893   11072 main.go:141] libmachine: STDOUT: 
	I1205 11:51:40.539919   11072 main.go:141] libmachine: STDERR: 
	I1205 11:51:40.539976   11072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2 +20000M
	I1205 11:51:40.548535   11072 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:40.548549   11072 main.go:141] libmachine: STDERR: 
	I1205 11:51:40.548563   11072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:40.548567   11072 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:40.548579   11072 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:40.548610   11072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:85:a0:62:24:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:40.550415   11072 main.go:141] libmachine: STDOUT: 
	I1205 11:51:40.550435   11072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:40.550453   11072 client.go:171] duration metric: took 288.067167ms to LocalClient.Create
	I1205 11:51:42.552610   11072 start.go:128] duration metric: took 2.316394s to createHost
	I1205 11:51:42.552670   11072 start.go:83] releasing machines lock for "bridge-907000", held for 2.316514625s
	W1205 11:51:42.552749   11072 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:42.563824   11072 out.go:177] * Deleting "bridge-907000" in qemu2 ...
	W1205 11:51:42.592187   11072 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:42.592222   11072 start.go:729] Will try again in 5 seconds ...
	I1205 11:51:47.594418   11072 start.go:360] acquireMachinesLock for bridge-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:47.594977   11072 start.go:364] duration metric: took 428.792µs to acquireMachinesLock for "bridge-907000"
	I1205 11:51:47.595078   11072 start.go:93] Provisioning new machine with config: &{Name:bridge-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:47.595389   11072 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:47.610188   11072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:47.660334   11072 start.go:159] libmachine.API.Create for "bridge-907000" (driver="qemu2")
	I1205 11:51:47.660402   11072 client.go:168] LocalClient.Create starting
	I1205 11:51:47.660595   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:47.660690   11072 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:47.660708   11072 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:47.660784   11072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:47.660856   11072 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:47.660872   11072 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:47.661591   11072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:47.829798   11072 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:48.040318   11072 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:48.040330   11072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:48.040595   11072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:48.051648   11072 main.go:141] libmachine: STDOUT: 
	I1205 11:51:48.051669   11072 main.go:141] libmachine: STDERR: 
	I1205 11:51:48.051725   11072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2 +20000M
	I1205 11:51:48.060346   11072 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:48.060366   11072 main.go:141] libmachine: STDERR: 
	I1205 11:51:48.060379   11072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:48.060383   11072 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:48.060391   11072 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:48.060422   11072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:4c:ad:0e:70:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/bridge-907000/disk.qcow2
	I1205 11:51:48.062179   11072 main.go:141] libmachine: STDOUT: 
	I1205 11:51:48.062191   11072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:48.062203   11072 client.go:171] duration metric: took 401.789417ms to LocalClient.Create
	I1205 11:51:50.064362   11072 start.go:128] duration metric: took 2.468957459s to createHost
	I1205 11:51:50.064423   11072 start.go:83] releasing machines lock for "bridge-907000", held for 2.469443041s
	W1205 11:51:50.064844   11072 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:50.078523   11072 out.go:201] 
	W1205 11:51:50.083594   11072 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:51:50.083654   11072 out.go:270] * 
	* 
	W1205 11:51:50.086203   11072 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:51:50.096486   11072 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.00s)
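
Every start failure in this group reduces to the same host-side condition: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives a network file descriptor and createHost aborts on both attempts. Below is a minimal Go sketch of that reachability probe, written for this report; it is not part of minikube or the test harness, and the socket path is simply the SocketVMnetPath value from the logged cluster config.

	package main

	// Probe the socket_vmnet unix socket that the qemu2 driver depends on.
	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
		// A unix-domain dial fails with "connection refused" when the socket
		// file exists but no daemon is accepting on it - the exact error
		// socket_vmnet_client reports in the runs above.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe on the agent before the suite starts would separate an unhealthy socket_vmnet service on the host from a genuine driver regression.
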
TestNetworkPlugins/group/kubenet/Start (9.86s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-907000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.856026875s)
-- stdout --
	* [kubenet-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-907000" primary control-plane node in "kubenet-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 11:51:52.474633   11185 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:51:52.474791   11185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:52.474795   11185 out.go:358] Setting ErrFile to fd 2...
	I1205 11:51:52.474797   11185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:51:52.474917   11185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:51:52.476064   11185 out.go:352] Setting JSON to false
	I1205 11:51:52.493736   11185 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6681,"bootTime":1733421631,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:51:52.493805   11185 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:51:52.499960   11185 out.go:177] * [kubenet-907000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:51:52.506827   11185 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:51:52.506877   11185 notify.go:220] Checking for updates...
	I1205 11:51:52.513938   11185 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:51:52.515460   11185 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:51:52.518884   11185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:51:52.521933   11185 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:51:52.524905   11185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:51:52.528208   11185 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:52.528285   11185 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:51:52.528341   11185 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:51:52.532901   11185 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:51:52.539893   11185 start.go:297] selected driver: qemu2
	I1205 11:51:52.539900   11185 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:51:52.539905   11185 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:51:52.542350   11185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:51:52.544855   11185 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:51:52.548014   11185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:51:52.548045   11185 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1205 11:51:52.548072   11185 start.go:340] cluster config:
	{Name:kubenet-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:51:52.552703   11185 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:51:52.559942   11185 out.go:177] * Starting "kubenet-907000" primary control-plane node in "kubenet-907000" cluster
	I1205 11:51:52.562870   11185 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:51:52.562889   11185 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:51:52.562904   11185 cache.go:56] Caching tarball of preloaded images
	I1205 11:51:52.563005   11185 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:51:52.563011   11185 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:51:52.563071   11185 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kubenet-907000/config.json ...
	I1205 11:51:52.563082   11185 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/kubenet-907000/config.json: {Name:mk7bcc4e4e16b170962e7c8a7341027c3134369c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:51:52.563416   11185 start.go:360] acquireMachinesLock for kubenet-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:52.563468   11185 start.go:364] duration metric: took 45.291µs to acquireMachinesLock for "kubenet-907000"
	I1205 11:51:52.563480   11185 start.go:93] Provisioning new machine with config: &{Name:kubenet-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:52.563513   11185 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:52.567956   11185 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:52.585999   11185 start.go:159] libmachine.API.Create for "kubenet-907000" (driver="qemu2")
	I1205 11:51:52.586055   11185 client.go:168] LocalClient.Create starting
	I1205 11:51:52.586166   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:52.586213   11185 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:52.586231   11185 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:52.586269   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:52.586302   11185 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:52.586311   11185 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:52.586784   11185 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:51:52.745083   11185 main.go:141] libmachine: Creating SSH key...
	I1205 11:51:52.789805   11185 main.go:141] libmachine: Creating Disk image...
	I1205 11:51:52.789811   11185 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:51:52.790004   11185 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:51:52.799901   11185 main.go:141] libmachine: STDOUT: 
	I1205 11:51:52.799922   11185 main.go:141] libmachine: STDERR: 
	I1205 11:51:52.799972   11185 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2 +20000M
	I1205 11:51:52.808452   11185 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:51:52.808469   11185 main.go:141] libmachine: STDERR: 
	I1205 11:51:52.808484   11185 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:51:52.808490   11185 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:51:52.808509   11185 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:51:52.808537   11185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f4:30:ba:2d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:51:52.810270   11185 main.go:141] libmachine: STDOUT: 
	I1205 11:51:52.810284   11185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:51:52.810303   11185 client.go:171] duration metric: took 224.232583ms to LocalClient.Create
	I1205 11:51:54.812487   11185 start.go:128] duration metric: took 2.24896225s to createHost
	I1205 11:51:54.812560   11185 start.go:83] releasing machines lock for "kubenet-907000", held for 2.249103292s
	W1205 11:51:54.812606   11185 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:54.822929   11185 out.go:177] * Deleting "kubenet-907000" in qemu2 ...
	W1205 11:51:54.848586   11185 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:51:54.848623   11185 start.go:729] Will try again in 5 seconds ...
	I1205 11:51:59.850768   11185 start.go:360] acquireMachinesLock for kubenet-907000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:51:59.851399   11185 start.go:364] duration metric: took 532.208µs to acquireMachinesLock for "kubenet-907000"
	I1205 11:51:59.851529   11185 start.go:93] Provisioning new machine with config: &{Name:kubenet-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:51:59.851772   11185 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:51:59.866604   11185 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:51:59.917507   11185 start.go:159] libmachine.API.Create for "kubenet-907000" (driver="qemu2")
	I1205 11:51:59.917558   11185 client.go:168] LocalClient.Create starting
	I1205 11:51:59.917701   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:51:59.917789   11185 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:59.917803   11185 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:59.917862   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:51:59.917925   11185 main.go:141] libmachine: Decoding PEM data...
	I1205 11:51:59.917936   11185 main.go:141] libmachine: Parsing certificate...
	I1205 11:51:59.918630   11185 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:00.085382   11185 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:00.232757   11185 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:00.232766   11185 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:00.232983   11185 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:52:00.243430   11185 main.go:141] libmachine: STDOUT: 
	I1205 11:52:00.243449   11185 main.go:141] libmachine: STDERR: 
	I1205 11:52:00.243502   11185 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2 +20000M
	I1205 11:52:00.252197   11185 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:00.252213   11185 main.go:141] libmachine: STDERR: 
	I1205 11:52:00.252225   11185 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:52:00.252229   11185 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:00.252238   11185 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:00.252289   11185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:0e:51:24:eb:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/kubenet-907000/disk.qcow2
	I1205 11:52:00.254059   11185 main.go:141] libmachine: STDOUT: 
	I1205 11:52:00.254071   11185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:00.254084   11185 client.go:171] duration metric: took 336.524541ms to LocalClient.Create
	I1205 11:52:02.256244   11185 start.go:128] duration metric: took 2.404462584s to createHost
	I1205 11:52:02.256302   11185 start.go:83] releasing machines lock for "kubenet-907000", held for 2.404897541s
	W1205 11:52:02.256779   11185 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:02.271461   11185 out.go:201] 
	W1205 11:52:02.273607   11185 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:02.273661   11185 out.go:270] * 
	* 
	W1205 11:52:02.276079   11185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:02.284419   11185 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
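
The retry shape is identical in every failed start above: createHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION (exit status 80). The following is a hypothetical Go sketch of that control flow, reconstructed only from the log lines in this report; minikube's real start.go is more involved, and all names here are illustrative.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the provisioning step that fails while
	// socket_vmnet is unreachable.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func start(profile string) error {
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// The logs show the profile being deleted here before the retry.
			time.Sleep(5 * time.Second)
			if err := createHost(profile); err != nil {
				return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		if err := start("kubenet-907000"); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}

Because the retry hits the same refused socket, the second attempt can never succeed, which is why each of these tests fails in roughly ten seconds.
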
TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.788260542s)
-- stdout --
	* [old-k8s-version-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-547000" primary control-plane node in "old-k8s-version-547000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 11:52:04.652890   11297 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:04.653082   11297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:04.653085   11297 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:04.653087   11297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:04.653232   11297 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:04.654369   11297 out.go:352] Setting JSON to false
	I1205 11:52:04.672155   11297 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6693,"bootTime":1733421631,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:04.672225   11297 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:04.678552   11297 out.go:177] * [old-k8s-version-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:04.685527   11297 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:04.685575   11297 notify.go:220] Checking for updates...
	I1205 11:52:04.692505   11297 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:04.695499   11297 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:04.698544   11297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:04.701483   11297 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:04.704495   11297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:04.707898   11297 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:04.707980   11297 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:04.708028   11297 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:04.711419   11297 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:52:04.718507   11297 start.go:297] selected driver: qemu2
	I1205 11:52:04.718514   11297 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:52:04.718522   11297 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:04.721075   11297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:52:04.722491   11297 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:52:04.725608   11297 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:04.725630   11297 cni.go:84] Creating CNI manager for ""
	I1205 11:52:04.725668   11297 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:52:04.725703   11297 start.go:340] cluster config:
	{Name:old-k8s-version-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:04.730348   11297 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:04.737443   11297 out.go:177] * Starting "old-k8s-version-547000" primary control-plane node in "old-k8s-version-547000" cluster
	I1205 11:52:04.741492   11297 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:52:04.741508   11297 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:52:04.741518   11297 cache.go:56] Caching tarball of preloaded images
	I1205 11:52:04.741609   11297 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:52:04.741615   11297 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:52:04.741665   11297 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/old-k8s-version-547000/config.json ...
	I1205 11:52:04.741677   11297 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/old-k8s-version-547000/config.json: {Name:mkaf9e9090825588a0a9e0fe7efb65a6e84c4b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:52:04.742068   11297 start.go:360] acquireMachinesLock for old-k8s-version-547000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:04.742116   11297 start.go:364] duration metric: took 41.208µs to acquireMachinesLock for "old-k8s-version-547000"
	I1205 11:52:04.742127   11297 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:04.742183   11297 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:04.746482   11297 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:04.763975   11297 start.go:159] libmachine.API.Create for "old-k8s-version-547000" (driver="qemu2")
	I1205 11:52:04.764003   11297 client.go:168] LocalClient.Create starting
	I1205 11:52:04.764074   11297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:04.764111   11297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:04.764126   11297 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:04.764160   11297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:04.764188   11297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:04.764198   11297 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:04.764645   11297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:04.930329   11297 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:05.003502   11297 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:05.003508   11297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:05.003700   11297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:05.013882   11297 main.go:141] libmachine: STDOUT: 
	I1205 11:52:05.013898   11297 main.go:141] libmachine: STDERR: 
	I1205 11:52:05.013951   11297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2 +20000M
	I1205 11:52:05.022579   11297 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:05.022594   11297 main.go:141] libmachine: STDERR: 
	I1205 11:52:05.022610   11297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:05.022624   11297 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:05.022642   11297 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:05.022669   11297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f0:7b:b0:79:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:05.024499   11297 main.go:141] libmachine: STDOUT: 
	I1205 11:52:05.024521   11297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:05.024538   11297 client.go:171] duration metric: took 260.533208ms to LocalClient.Create
	I1205 11:52:07.026710   11297 start.go:128] duration metric: took 2.284525583s to createHost
	I1205 11:52:07.026783   11297 start.go:83] releasing machines lock for "old-k8s-version-547000", held for 2.2846785s
	W1205 11:52:07.026840   11297 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:07.038085   11297 out.go:177] * Deleting "old-k8s-version-547000" in qemu2 ...
	W1205 11:52:07.063573   11297 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:07.063601   11297 start.go:729] Will try again in 5 seconds ...
	I1205 11:52:12.063890   11297 start.go:360] acquireMachinesLock for old-k8s-version-547000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:12.064506   11297 start.go:364] duration metric: took 509.291µs to acquireMachinesLock for "old-k8s-version-547000"
	I1205 11:52:12.064620   11297 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:12.064947   11297 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:12.070588   11297 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:12.120101   11297 start.go:159] libmachine.API.Create for "old-k8s-version-547000" (driver="qemu2")
	I1205 11:52:12.120149   11297 client.go:168] LocalClient.Create starting
	I1205 11:52:12.120270   11297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:12.120339   11297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:12.120358   11297 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:12.120415   11297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:12.120472   11297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:12.120484   11297 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:12.121514   11297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:12.292930   11297 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:12.338882   11297 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:12.338886   11297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:12.339061   11297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:12.349040   11297 main.go:141] libmachine: STDOUT: 
	I1205 11:52:12.349070   11297 main.go:141] libmachine: STDERR: 
	I1205 11:52:12.349125   11297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2 +20000M
	I1205 11:52:12.357754   11297 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:12.357769   11297 main.go:141] libmachine: STDERR: 
	I1205 11:52:12.357786   11297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:12.357791   11297 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:12.357806   11297 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:12.357837   11297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3b:f0:8a:fb:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:12.359648   11297 main.go:141] libmachine: STDOUT: 
	I1205 11:52:12.359660   11297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:12.359676   11297 client.go:171] duration metric: took 239.698084ms to LocalClient.Create
	I1205 11:52:14.360487   11297 start.go:128] duration metric: took 2.297110208s to createHost
	I1205 11:52:14.360536   11297 start.go:83] releasing machines lock for "old-k8s-version-547000", held for 2.297602083s
	W1205 11:52:14.360732   11297 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:14.372130   11297 out.go:201] 
	W1205 11:52:14.376368   11297 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:14.376393   11297 out.go:270] * 
	* 
	W1205 11:52:14.379191   11297 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:14.391133   11297 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (72.353542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)
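Every failed start in this group dies at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is not answering, so the VM is never created. A minimal triage sketch for the affected agent follows; these commands are not part of the run, and they assume socket_vmnet is installed under /opt/socket_vmnet as in the logged command lines (the gateway address is the upstream README's example value, not something taken from this report):

	# Is the daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it as root; 192.168.105.1 is an example gateway address.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet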

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-547000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-547000 create -f testdata/busybox.yaml: exit status 1 (29.100667ms)

** stderr ** 
	error: context "old-k8s-version-547000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-547000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (34.534584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (34.108125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
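This failure, like the other kubectl-based checks below, is a cascade of FirstStart: minikube writes the old-k8s-version-547000 context into the kubeconfig only once a start succeeds, so with the VM never created, every "kubectl --context old-k8s-version-547000 ..." invocation can only report a missing context. A quick manual confirmation (standard kubectl, not run here) against the kubeconfig this job uses:

	kubectl --kubeconfig /Users/jenkins/minikube-integration/20053-7409/kubeconfig config get-contexts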

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-547000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-547000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-547000 describe deploy/metrics-server -n kube-system: exit status 1 (27.576333ms)

** stderr ** 
	error: context "old-k8s-version-547000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-547000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (33.768792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.198792083s)

-- stdout --
	* [old-k8s-version-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-547000" primary control-plane node in "old-k8s-version-547000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-547000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:52:18.357128   11346 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:18.357316   11346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:18.357319   11346 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:18.357321   11346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:18.357447   11346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:18.358515   11346 out.go:352] Setting JSON to false
	I1205 11:52:18.376128   11346 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6707,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:18.376196   11346 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:18.381064   11346 out.go:177] * [old-k8s-version-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:18.389015   11346 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:18.389075   11346 notify.go:220] Checking for updates...
	I1205 11:52:18.396027   11346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:18.398936   11346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:18.401984   11346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:18.404968   11346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:18.407960   11346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:18.411301   11346 config.go:182] Loaded profile config "old-k8s-version-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:52:18.414978   11346 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:52:18.417951   11346 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:18.421981   11346 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:52:18.428912   11346 start.go:297] selected driver: qemu2
	I1205 11:52:18.428918   11346 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:18.428973   11346 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:18.431571   11346 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:18.431601   11346 cni.go:84] Creating CNI manager for ""
	I1205 11:52:18.431623   11346 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:52:18.431646   11346 start.go:340] cluster config:
	{Name:old-k8s-version-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:18.436345   11346 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:18.443928   11346 out.go:177] * Starting "old-k8s-version-547000" primary control-plane node in "old-k8s-version-547000" cluster
	I1205 11:52:18.447982   11346 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:52:18.448002   11346 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:52:18.448021   11346 cache.go:56] Caching tarball of preloaded images
	I1205 11:52:18.448110   11346 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:52:18.448116   11346 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:52:18.448166   11346 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/old-k8s-version-547000/config.json ...
	I1205 11:52:18.448626   11346 start.go:360] acquireMachinesLock for old-k8s-version-547000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:18.448678   11346 start.go:364] duration metric: took 45.167µs to acquireMachinesLock for "old-k8s-version-547000"
	I1205 11:52:18.448687   11346 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:52:18.448692   11346 fix.go:54] fixHost starting: 
	I1205 11:52:18.448831   11346 fix.go:112] recreateIfNeeded on old-k8s-version-547000: state=Stopped err=<nil>
	W1205 11:52:18.448839   11346 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:52:18.452974   11346 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-547000" ...
	I1205 11:52:18.460784   11346 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:18.460833   11346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3b:f0:8a:fb:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:18.463222   11346 main.go:141] libmachine: STDOUT: 
	I1205 11:52:18.463243   11346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:18.463271   11346 fix.go:56] duration metric: took 14.585625ms for fixHost
	I1205 11:52:18.463276   11346 start.go:83] releasing machines lock for "old-k8s-version-547000", held for 14.601208ms
	W1205 11:52:18.463283   11346 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:18.463315   11346 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:18.463319   11346 start.go:729] Will try again in 5 seconds ...
	I1205 11:52:23.463419   11346 start.go:360] acquireMachinesLock for old-k8s-version-547000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:23.463868   11346 start.go:364] duration metric: took 363.125µs to acquireMachinesLock for "old-k8s-version-547000"
	I1205 11:52:23.463976   11346 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:52:23.464000   11346 fix.go:54] fixHost starting: 
	I1205 11:52:23.464670   11346 fix.go:112] recreateIfNeeded on old-k8s-version-547000: state=Stopped err=<nil>
	W1205 11:52:23.464695   11346 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:52:23.474002   11346 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-547000" ...
	I1205 11:52:23.477782   11346 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:23.478047   11346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3b:f0:8a:fb:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/old-k8s-version-547000/disk.qcow2
	I1205 11:52:23.487556   11346 main.go:141] libmachine: STDOUT: 
	I1205 11:52:23.487598   11346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:23.487653   11346 fix.go:56] duration metric: took 23.666375ms for fixHost
	I1205 11:52:23.487672   11346 start.go:83] releasing machines lock for "old-k8s-version-547000", held for 23.790459ms
	W1205 11:52:23.487853   11346 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-547000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:23.494999   11346 out.go:201] 
	W1205 11:52:23.499048   11346 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:23.499088   11346 out.go:270] * 
	* 
	W1205 11:52:23.501726   11346 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:23.507973   11346 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (73.131417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
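The second start cannot repair anything by itself: fixHost finds the machine Stopped and re-issues the same qemu-system-aarch64 command line, which fails on the same refused socket. Once socket_vmnet is reachable again, the recovery path the output itself suggests would look like this (a sketch; only the essential flags from the failing invocation are repeated):

	out/minikube-darwin-arm64 delete -p old-k8s-version-547000
	out/minikube-darwin-arm64 start -p old-k8s-version-547000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0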

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-547000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (34.779167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-547000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-547000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-547000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.231291ms)

** stderr ** 
	error: context "old-k8s-version-547000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-547000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (33.878709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-547000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (34.62175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
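The want/got block above is a go-cmp diff: all eight expected v1.20.0 images sit on the "-" (want) side and nothing appears on the "+" (got) side, i.e. the image listing evidently came back empty because the profile's host is Stopped. The test's own command can be replayed by hand to confirm:

	out/minikube-darwin-arm64 -p old-k8s-version-547000 image list --format=json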

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-547000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-547000 --alsologtostderr -v=1: exit status 83 (43.284875ms)

-- stdout --
	* The control-plane node old-k8s-version-547000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-547000"

-- /stdout --
** stderr ** 
	I1205 11:52:23.803281   11365 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:23.803721   11365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:23.803725   11365 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:23.803728   11365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:23.803857   11365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:23.804091   11365 out.go:352] Setting JSON to false
	I1205 11:52:23.804097   11365 mustload.go:65] Loading cluster: old-k8s-version-547000
	I1205 11:52:23.804319   11365 config.go:182] Loaded profile config "old-k8s-version-547000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:52:23.807379   11365 out.go:177] * The control-plane node old-k8s-version-547000 host is not running: state=Stopped
	I1205 11:52:23.811209   11365 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-547000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-547000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (33.465625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (34.111292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-547000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
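Note that pause degrades differently from the start commands: rather than exit status 80 from a failed provision, it detects the Stopped host up front and returns exit status 83 together with its own recovery hint, which can be taken verbatim (using the test binary in place of an installed minikube):

	out/minikube-darwin-arm64 start -p old-k8s-version-547000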

TestStartStop/group/no-preload/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.986501084s)

-- stdout --
	* [no-preload-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-911000" primary control-plane node in "no-preload-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:52:24.143701   11382 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:24.143858   11382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:24.143862   11382 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:24.143864   11382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:24.144001   11382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:24.145189   11382 out.go:352] Setting JSON to false
	I1205 11:52:24.162921   11382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6713,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:24.162991   11382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:24.168161   11382 out.go:177] * [no-preload-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:24.175147   11382 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:24.175191   11382 notify.go:220] Checking for updates...
	I1205 11:52:24.182111   11382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:24.185082   11382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:24.188120   11382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:24.191114   11382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:24.194098   11382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:24.197474   11382 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:24.197531   11382 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:24.197588   11382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:24.202030   11382 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:52:24.209086   11382 start.go:297] selected driver: qemu2
	I1205 11:52:24.209093   11382 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:52:24.209100   11382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:24.211647   11382 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:52:24.215059   11382 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:52:24.218172   11382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:24.218193   11382 cni.go:84] Creating CNI manager for ""
	I1205 11:52:24.218215   11382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:52:24.218220   11382 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:52:24.218262   11382 start.go:340] cluster config:
	{Name:no-preload-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:24.222893   11382 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.229030   11382 out.go:177] * Starting "no-preload-911000" primary control-plane node in "no-preload-911000" cluster
	I1205 11:52:24.233051   11382 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:52:24.233133   11382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/no-preload-911000/config.json ...
	I1205 11:52:24.233153   11382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/no-preload-911000/config.json: {Name:mk9dfd9f775f52e10fdd6bb7eeb1d0e983ae5368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:52:24.233151   11382 cache.go:107] acquiring lock: {Name:mkf2f9504745b78223a295d4db642411c341d99c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233157   11382 cache.go:107] acquiring lock: {Name:mk7219f76a22d5b096826cbf727e0bb07efaf64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233175   11382 cache.go:107] acquiring lock: {Name:mk87b951b2b06333a174465532e0256cdd77d392 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233264   11382 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:52:24.233270   11382 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.916µs
	I1205 11:52:24.233152   11382 cache.go:107] acquiring lock: {Name:mk74dc6dc479b3065d11ea908863b4e2fb98f17c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233364   11382 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 11:52:24.233381   11382 cache.go:107] acquiring lock: {Name:mk0815619acb2188fc7cb8d1aebe01cfbad60b71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233385   11382 cache.go:107] acquiring lock: {Name:mk2ffebbc051c993bb191ba8e7efbca53cdbf72b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233415   11382 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 11:52:24.233471   11382 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:52:24.233433   11382 cache.go:107] acquiring lock: {Name:mkbb5eeb0dc79c12551e6c3c9ea52fcde4e3662e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233466   11382 cache.go:107] acquiring lock: {Name:mk488c700ecee4217e810749b3c8e1a89a848a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:24.233556   11382 start.go:360] acquireMachinesLock for no-preload-911000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:24.233567   11382 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 11:52:24.233753   11382 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 11:52:24.233758   11382 start.go:364] duration metric: took 194.75µs to acquireMachinesLock for "no-preload-911000"
	I1205 11:52:24.233777   11382 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 11:52:24.233770   11382 start.go:93] Provisioning new machine with config: &{Name:no-preload-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:24.233801   11382 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:24.233910   11382 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:52:24.233954   11382 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 11:52:24.241032   11382 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:24.244749   11382 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 11:52:24.245947   11382 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 11:52:24.245951   11382 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 11:52:24.246260   11382 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 11:52:24.246265   11382 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 11:52:24.247761   11382 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 11:52:24.248069   11382 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:52:24.259184   11382 start.go:159] libmachine.API.Create for "no-preload-911000" (driver="qemu2")
	I1205 11:52:24.259202   11382 client.go:168] LocalClient.Create starting
	I1205 11:52:24.259278   11382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:24.259314   11382 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:24.259322   11382 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:24.259361   11382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:24.259390   11382 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:24.259397   11382 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:24.259785   11382 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:24.418301   11382 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:24.473523   11382 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:24.473539   11382 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:24.473752   11382 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:24.483722   11382 main.go:141] libmachine: STDOUT: 
	I1205 11:52:24.483740   11382 main.go:141] libmachine: STDERR: 
	I1205 11:52:24.483797   11382 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2 +20000M
	I1205 11:52:24.492991   11382 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:24.493012   11382 main.go:141] libmachine: STDERR: 
	I1205 11:52:24.493035   11382 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:24.493040   11382 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:24.493051   11382 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:24.493084   11382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:4e:41:54:3d:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:24.495102   11382 main.go:141] libmachine: STDOUT: 
	I1205 11:52:24.495117   11382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:24.495134   11382 client.go:171] duration metric: took 236.011459ms to LocalClient.Create
	I1205 11:52:24.679901   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1205 11:52:24.684759   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 11:52:24.721543   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 11:52:24.809288   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 11:52:24.835997   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1205 11:52:24.836008   11382 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 602.833625ms
	I1205 11:52:24.836014   11382 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1205 11:52:24.880493   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 11:52:24.936252   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 11:52:25.037654   11382 cache.go:162] opening:  /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1205 11:52:26.494951   11382 start.go:128] duration metric: took 2.261846291s to createHost
	I1205 11:52:26.495024   11382 start.go:83] releasing machines lock for "no-preload-911000", held for 2.261989083s
	W1205 11:52:26.495085   11382 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:26.509266   11382 out.go:177] * Deleting "no-preload-911000" in qemu2 ...
	W1205 11:52:26.534115   11382 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:26.534145   11382 start.go:729] Will try again in 5 seconds ...
	I1205 11:52:27.965855   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 11:52:27.965932   11382 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 3.73393575s
	I1205 11:52:27.965957   11382 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 11:52:28.199654   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 11:52:28.199707   11382 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.967594583s
	I1205 11:52:28.199742   11382 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 11:52:28.453565   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 11:52:28.453609   11382 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 4.221745542s
	I1205 11:52:28.453633   11382 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 11:52:30.393527   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 11:52:30.393591   11382 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 6.161963583s
	I1205 11:52:30.393619   11382 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 11:52:30.890113   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 11:52:30.890169   11382 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 6.658882084s
	I1205 11:52:30.890214   11382 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 11:52:31.533344   11382 start.go:360] acquireMachinesLock for no-preload-911000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:31.533855   11382 start.go:364] duration metric: took 416.666µs to acquireMachinesLock for "no-preload-911000"
	I1205 11:52:31.533952   11382 start.go:93] Provisioning new machine with config: &{Name:no-preload-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:31.534201   11382 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:31.539889   11382 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:31.589508   11382 start.go:159] libmachine.API.Create for "no-preload-911000" (driver="qemu2")
	I1205 11:52:31.589555   11382 client.go:168] LocalClient.Create starting
	I1205 11:52:31.589692   11382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:31.589773   11382 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:31.589793   11382 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:31.589852   11382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:31.589917   11382 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:31.589933   11382 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:31.590509   11382 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:31.754728   11382 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:32.024861   11382 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:32.024875   11382 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:32.025165   11382 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:32.036332   11382 main.go:141] libmachine: STDOUT: 
	I1205 11:52:32.036357   11382 main.go:141] libmachine: STDERR: 
	I1205 11:52:32.036419   11382 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2 +20000M
	I1205 11:52:32.045332   11382 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:32.045347   11382 main.go:141] libmachine: STDERR: 
	I1205 11:52:32.045366   11382 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:32.045371   11382 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:32.045376   11382 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:32.045425   11382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:12:8f:8c:09:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:32.047379   11382 main.go:141] libmachine: STDOUT: 
	I1205 11:52:32.047394   11382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:32.047407   11382 client.go:171] duration metric: took 457.942916ms to LocalClient.Create
	I1205 11:52:32.356723   11382 cache.go:157] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1205 11:52:32.356780   11382 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.125691958s
	I1205 11:52:32.356806   11382 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1205 11:52:32.356867   11382 cache.go:87] Successfully saved all images to host disk.
	I1205 11:52:34.049191   11382 start.go:128] duration metric: took 2.5154815s to createHost
	I1205 11:52:34.049371   11382 start.go:83] releasing machines lock for "no-preload-911000", held for 2.515881791s
	W1205 11:52:34.049666   11382 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:34.065324   11382 out.go:201] 
	W1205 11:52:34.070504   11382 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:34.070528   11382 out.go:270] * 
	* 
	W1205 11:52:34.073081   11382 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:34.081319   11382 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (71.604791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.06s)
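Note: every VM create in this run dies at the same host-side step, before the guest ever boots: the qemu2 driver cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused"), while the image-cache work in the log succeeds independently of the VM. A minimal host-side diagnostic sketch (paths copied from the log above; the process-name check is an assumption about how the daemon is launched):

	# Hedged check for the socket_vmnet daemon the qemu2 driver depends on.
	ls -l /var/run/socket_vmnet                       # the unix socket the driver dials
	pgrep -fl socket_vmnet                            # assumption: daemon runs under this name
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client   # client binary shown in the log

If the socket is absent or nothing is listening on it, every start in this group fails the same way.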

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-911000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-911000 create -f testdata/busybox.yaml: exit status 1 (29.605917ms)

** stderr ** 
	error: context "no-preload-911000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-911000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (34.068625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (33.256334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
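Note: this failure is purely downstream of FirstStart. Because the VM never came up, minikube start never wrote a "no-preload-911000" entry into the kubeconfig, so every kubectl --context call exits 1 before reaching any cluster. A sketch of the missing precondition, with the KUBECONFIG path copied from the log and the echo purely illustrative:

	# Hedged sketch: verify the context the test assumes exists.
	KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig \
	  kubectl config get-contexts no-preload-911000 \
	  || echo "context was never created by the failed first start"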

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-911000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-911000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-911000 describe deploy/metrics-server -n kube-system: exit status 1 (27.821458ms)

** stderr ** 
	error: context "no-preload-911000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-911000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (33.781916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
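Note: the addons enable step itself reported no error. With the host stopped, minikube records the metrics-server addon (and its custom image/registry overrides) in the profile config, and only the follow-up kubectl describe fails. A hedged way to see the recorded-but-undeployed state, using the standard addons subcommand:

	# Hedged sketch: the profile should show metrics-server enabled even though
	# nothing was ever deployed to the (never-started) cluster.
	out/minikube-darwin-arm64 addons list -p no-preload-911000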

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.18776875s)

-- stdout --
	* [no-preload-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-911000" primary control-plane node in "no-preload-911000" cluster
	* Restarting existing qemu2 VM for "no-preload-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:52:36.421613   11450 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:36.421773   11450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:36.421776   11450 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:36.421779   11450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:36.421921   11450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:36.422991   11450 out.go:352] Setting JSON to false
	I1205 11:52:36.440715   11450 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6725,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:36.440788   11450 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:36.445063   11450 out.go:177] * [no-preload-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:36.452045   11450 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:36.452082   11450 notify.go:220] Checking for updates...
	I1205 11:52:36.457636   11450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:36.461069   11450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:36.464069   11450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:36.467105   11450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:36.470093   11450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:36.473368   11450 config.go:182] Loaded profile config "no-preload-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:36.473631   11450 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:36.478065   11450 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:52:36.485054   11450 start.go:297] selected driver: qemu2
	I1205 11:52:36.485061   11450 start.go:901] validating driver "qemu2" against &{Name:no-preload-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:36.485127   11450 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:36.487585   11450 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:36.487609   11450 cni.go:84] Creating CNI manager for ""
	I1205 11:52:36.487631   11450 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:52:36.487661   11450 start.go:340] cluster config:
	{Name:no-preload-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:36.492060   11450 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.498997   11450 out.go:177] * Starting "no-preload-911000" primary control-plane node in "no-preload-911000" cluster
	I1205 11:52:36.503042   11450 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:52:36.503119   11450 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/no-preload-911000/config.json ...
	I1205 11:52:36.503146   11450 cache.go:107] acquiring lock: {Name:mk87b951b2b06333a174465532e0256cdd77d392 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503169   11450 cache.go:107] acquiring lock: {Name:mk74dc6dc479b3065d11ea908863b4e2fb98f17c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503178   11450 cache.go:107] acquiring lock: {Name:mk7219f76a22d5b096826cbf727e0bb07efaf64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503252   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 11:52:36.503259   11450 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 117.75µs
	I1205 11:52:36.503264   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 11:52:36.503259   11450 cache.go:107] acquiring lock: {Name:mkbb5eeb0dc79c12551e6c3c9ea52fcde4e3662e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503270   11450 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 119.792µs
	I1205 11:52:36.503254   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 11:52:36.503278   11450 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 11:52:36.503271   11450 cache.go:107] acquiring lock: {Name:mk2ffebbc051c993bb191ba8e7efbca53cdbf72b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503280   11450 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 111.708µs
	I1205 11:52:36.503284   11450 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 11:52:36.503265   11450 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 11:52:36.503146   11450 cache.go:107] acquiring lock: {Name:mkf2f9504745b78223a295d4db642411c341d99c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503317   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1205 11:52:36.503321   11450 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 62.083µs
	I1205 11:52:36.503328   11450 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1205 11:52:36.503294   11450 cache.go:107] acquiring lock: {Name:mk0815619acb2188fc7cb8d1aebe01cfbad60b71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503339   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 11:52:36.503384   11450 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 110.792µs
	I1205 11:52:36.503389   11450 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 11:52:36.503344   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:52:36.503393   11450 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 252.875µs
	I1205 11:52:36.503397   11450 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:52:36.503344   11450 cache.go:107] acquiring lock: {Name:mk488c700ecee4217e810749b3c8e1a89a848a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:36.503412   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1205 11:52:36.503418   11450 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 148.667µs
	I1205 11:52:36.503421   11450 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1205 11:52:36.503450   11450 cache.go:115] /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 11:52:36.503454   11450 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 157.75µs
	I1205 11:52:36.503457   11450 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 11:52:36.503462   11450 cache.go:87] Successfully saved all images to host disk.
	I1205 11:52:36.503574   11450 start.go:360] acquireMachinesLock for no-preload-911000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:36.503605   11450 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "no-preload-911000"
	I1205 11:52:36.503613   11450 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:52:36.503618   11450 fix.go:54] fixHost starting: 
	I1205 11:52:36.503734   11450 fix.go:112] recreateIfNeeded on no-preload-911000: state=Stopped err=<nil>
	W1205 11:52:36.503743   11450 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:52:36.511024   11450 out.go:177] * Restarting existing qemu2 VM for "no-preload-911000" ...
	I1205 11:52:36.513985   11450 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:36.514023   11450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:12:8f:8c:09:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:36.516342   11450 main.go:141] libmachine: STDOUT: 
	I1205 11:52:36.516365   11450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:36.516395   11450 fix.go:56] duration metric: took 12.780417ms for fixHost
	I1205 11:52:36.516400   11450 start.go:83] releasing machines lock for "no-preload-911000", held for 12.793667ms
	W1205 11:52:36.516407   11450 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:36.516447   11450 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:36.516453   11450 start.go:729] Will try again in 5 seconds ...
	I1205 11:52:41.517946   11450 start.go:360] acquireMachinesLock for no-preload-911000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:41.518370   11450 start.go:364] duration metric: took 346.417µs to acquireMachinesLock for "no-preload-911000"
	I1205 11:52:41.518512   11450 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:52:41.518531   11450 fix.go:54] fixHost starting: 
	I1205 11:52:41.519278   11450 fix.go:112] recreateIfNeeded on no-preload-911000: state=Stopped err=<nil>
	W1205 11:52:41.519309   11450 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:52:41.523851   11450 out.go:177] * Restarting existing qemu2 VM for "no-preload-911000" ...
	I1205 11:52:41.530764   11450 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:41.530990   11450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:12:8f:8c:09:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/no-preload-911000/disk.qcow2
	I1205 11:52:41.540724   11450 main.go:141] libmachine: STDOUT: 
	I1205 11:52:41.540795   11450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:41.540849   11450 fix.go:56] duration metric: took 22.324542ms for fixHost
	I1205 11:52:41.540866   11450 start.go:83] releasing machines lock for "no-preload-911000", held for 22.474875ms
	W1205 11:52:41.541049   11450 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:41.549783   11450 out.go:201] 
	W1205 11:52:41.552842   11450 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:41.552874   11450 out.go:270] * 
	* 
	W1205 11:52:41.555316   11450 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:41.563752   11450 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-911000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (73.523ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
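Note: the second start takes the fixHost path (reusing the existing machine config and MAC address) but hits the identical vmnet connect error on both attempts, so the built-in 5-second retry cannot help. The failing step can be reproduced in isolation; this is a sketch that assumes socket_vmnet_client's usual "socket path, then command" calling convention visible in the log:

	# Hedged repro: dial the vmnet socket and run a no-op instead of QEMU.
	# While the daemon is down this should print the same "Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  || echo "socket_vmnet unreachable"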

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-911000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (34.770167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-911000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.190458ms)

** stderr ** 
	error: context "no-preload-911000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (33.359375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-911000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (33.378708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-911000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-911000 --alsologtostderr -v=1: exit status 83 (44.089042ms)

-- stdout --
	* The control-plane node no-preload-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-911000"

-- /stdout --
** stderr ** 
	I1205 11:52:41.857437   11472 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:41.857632   11472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:41.857635   11472 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:41.857638   11472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:41.857766   11472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:41.857982   11472 out.go:352] Setting JSON to false
	I1205 11:52:41.857989   11472 mustload.go:65] Loading cluster: no-preload-911000
	I1205 11:52:41.858220   11472 config.go:182] Loaded profile config "no-preload-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:41.861700   11472 out.go:177] * The control-plane node no-preload-911000 host is not running: state=Stopped
	I1205 11:52:41.865667   11472 out.go:177]   To start a cluster, run: "minikube start -p no-preload-911000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-911000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (33.769834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (34.085792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.751581917s)

-- stdout --
	* [embed-certs-541000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-541000" primary control-plane node in "embed-certs-541000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-541000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:52:42.195000   11489 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:42.195143   11489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:42.195147   11489 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:42.195149   11489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:42.195308   11489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:42.196429   11489 out.go:352] Setting JSON to false
	I1205 11:52:42.214229   11489 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6731,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:42.214329   11489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:42.219640   11489 out.go:177] * [embed-certs-541000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:42.226603   11489 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:42.226632   11489 notify.go:220] Checking for updates...
	I1205 11:52:42.233624   11489 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:42.236570   11489 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:42.239669   11489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:42.242588   11489 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:42.245590   11489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:42.248988   11489 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:42.249051   11489 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:42.249101   11489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:42.253530   11489 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:52:42.260608   11489 start.go:297] selected driver: qemu2
	I1205 11:52:42.260616   11489 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:52:42.260623   11489 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:42.263179   11489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:52:42.266615   11489 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:52:42.269647   11489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:42.269672   11489 cni.go:84] Creating CNI manager for ""
	I1205 11:52:42.269694   11489 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:52:42.269699   11489 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:52:42.269747   11489 start.go:340] cluster config:
	{Name:embed-certs-541000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:42.274426   11489 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:42.277676   11489 out.go:177] * Starting "embed-certs-541000" primary control-plane node in "embed-certs-541000" cluster
	I1205 11:52:42.281575   11489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:52:42.281591   11489 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:52:42.281610   11489 cache.go:56] Caching tarball of preloaded images
	I1205 11:52:42.281700   11489 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:52:42.281706   11489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:52:42.281784   11489 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/embed-certs-541000/config.json ...
	I1205 11:52:42.281796   11489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/embed-certs-541000/config.json: {Name:mk04c7c5ecb5f99475921972122c318722afdf89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:52:42.282088   11489 start.go:360] acquireMachinesLock for embed-certs-541000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:42.282134   11489 start.go:364] duration metric: took 39.959µs to acquireMachinesLock for "embed-certs-541000"
	I1205 11:52:42.282144   11489 start.go:93] Provisioning new machine with config: &{Name:embed-certs-541000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:42.282175   11489 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:42.286452   11489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:42.303298   11489 start.go:159] libmachine.API.Create for "embed-certs-541000" (driver="qemu2")
	I1205 11:52:42.303326   11489 client.go:168] LocalClient.Create starting
	I1205 11:52:42.303401   11489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:42.303435   11489 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:42.303447   11489 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:42.303486   11489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:42.303515   11489 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:42.303525   11489 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:42.303900   11489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:42.459393   11489 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:42.493268   11489 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:42.493274   11489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:42.493464   11489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:42.503901   11489 main.go:141] libmachine: STDOUT: 
	I1205 11:52:42.503918   11489 main.go:141] libmachine: STDERR: 
	I1205 11:52:42.503967   11489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2 +20000M
	I1205 11:52:42.512461   11489 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:42.512481   11489 main.go:141] libmachine: STDERR: 
	I1205 11:52:42.512506   11489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:42.512513   11489 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:42.512527   11489 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:42.512558   11489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c0:3c:3f:6a:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:42.514323   11489 main.go:141] libmachine: STDOUT: 
	I1205 11:52:42.514336   11489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:42.514364   11489 client.go:171] duration metric: took 211.056042ms to LocalClient.Create
	I1205 11:52:44.516334   11489 start.go:128] duration metric: took 2.234383791s to createHost
	I1205 11:52:44.516390   11489 start.go:83] releasing machines lock for "embed-certs-541000", held for 2.234493625s
	W1205 11:52:44.516454   11489 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:44.532612   11489 out.go:177] * Deleting "embed-certs-541000" in qemu2 ...
	W1205 11:52:44.557802   11489 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:44.557833   11489 start.go:729] Will try again in 5 seconds ...
	I1205 11:52:49.559564   11489 start.go:360] acquireMachinesLock for embed-certs-541000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:49.560130   11489 start.go:364] duration metric: took 469.584µs to acquireMachinesLock for "embed-certs-541000"
	I1205 11:52:49.560260   11489 start.go:93] Provisioning new machine with config: &{Name:embed-certs-541000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:52:49.560537   11489 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:52:49.574403   11489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:52:49.624104   11489 start.go:159] libmachine.API.Create for "embed-certs-541000" (driver="qemu2")
	I1205 11:52:49.624159   11489 client.go:168] LocalClient.Create starting
	I1205 11:52:49.624289   11489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:52:49.624363   11489 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:49.624396   11489 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:49.624451   11489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:52:49.624508   11489 main.go:141] libmachine: Decoding PEM data...
	I1205 11:52:49.624518   11489 main.go:141] libmachine: Parsing certificate...
	I1205 11:52:49.625080   11489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:52:49.791869   11489 main.go:141] libmachine: Creating SSH key...
	I1205 11:52:49.846060   11489 main.go:141] libmachine: Creating Disk image...
	I1205 11:52:49.846065   11489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:52:49.846262   11489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:49.856280   11489 main.go:141] libmachine: STDOUT: 
	I1205 11:52:49.856302   11489 main.go:141] libmachine: STDERR: 
	I1205 11:52:49.856355   11489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2 +20000M
	I1205 11:52:49.864887   11489 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:52:49.864903   11489 main.go:141] libmachine: STDERR: 
	I1205 11:52:49.864921   11489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:49.864928   11489 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:52:49.864938   11489 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:49.864970   11489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:37:7b:1a:ff:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:49.866746   11489 main.go:141] libmachine: STDOUT: 
	I1205 11:52:49.866759   11489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:49.866772   11489 client.go:171] duration metric: took 242.626042ms to LocalClient.Create
	I1205 11:52:51.868808   11489 start.go:128] duration metric: took 2.308410042s to createHost
	I1205 11:52:51.868856   11489 start.go:83] releasing machines lock for "embed-certs-541000", held for 2.308873s
	W1205 11:52:51.869284   11489 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-541000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-541000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:51.882892   11489 out.go:201] 
	W1205 11:52:51.887053   11489 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:51.887082   11489 out.go:270] * 
	* 
	W1205 11:52:51.889580   11489 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:52:51.899908   11489 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (73.458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.83s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-541000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-541000 create -f testdata/busybox.yaml: exit status 1 (29.1135ms)

** stderr ** 
	error: context "embed-certs-541000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-541000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (33.733333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (34.220875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-541000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-541000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-541000 describe deploy/metrics-server -n kube-system: exit status 1 (27.410292ms)

** stderr ** 
	error: context "embed-certs-541000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-541000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (34.217125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.190978416s)

-- stdout --
	* [embed-certs-541000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-541000" primary control-plane node in "embed-certs-541000" cluster
	* Restarting existing qemu2 VM for "embed-certs-541000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-541000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:52:55.629670   11537 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:52:55.629862   11537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:55.629865   11537 out.go:358] Setting ErrFile to fd 2...
	I1205 11:52:55.629867   11537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:52:55.629999   11537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:52:55.631172   11537 out.go:352] Setting JSON to false
	I1205 11:52:55.648740   11537 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6744,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:52:55.648804   11537 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:52:55.653816   11537 out.go:177] * [embed-certs-541000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:52:55.660746   11537 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:52:55.660813   11537 notify.go:220] Checking for updates...
	I1205 11:52:55.666716   11537 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:52:55.669744   11537 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:52:55.671203   11537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:52:55.674691   11537 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:52:55.677740   11537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:52:55.685839   11537 config.go:182] Loaded profile config "embed-certs-541000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:52:55.686116   11537 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:52:55.690756   11537 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:52:55.697548   11537 start.go:297] selected driver: qemu2
	I1205 11:52:55.697554   11537 start.go:901] validating driver "qemu2" against &{Name:embed-certs-541000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:55.697609   11537 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:52:55.700340   11537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:52:55.700362   11537 cni.go:84] Creating CNI manager for ""
	I1205 11:52:55.700385   11537 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:52:55.700411   11537 start.go:340] cluster config:
	{Name:embed-certs-541000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:52:55.705093   11537 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:52:55.712728   11537 out.go:177] * Starting "embed-certs-541000" primary control-plane node in "embed-certs-541000" cluster
	I1205 11:52:55.716679   11537 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:52:55.716696   11537 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:52:55.716712   11537 cache.go:56] Caching tarball of preloaded images
	I1205 11:52:55.716798   11537 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:52:55.716804   11537 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:52:55.716880   11537 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/embed-certs-541000/config.json ...
	I1205 11:52:55.717279   11537 start.go:360] acquireMachinesLock for embed-certs-541000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:52:55.717315   11537 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "embed-certs-541000"
	I1205 11:52:55.717324   11537 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:52:55.717330   11537 fix.go:54] fixHost starting: 
	I1205 11:52:55.717467   11537 fix.go:112] recreateIfNeeded on embed-certs-541000: state=Stopped err=<nil>
	W1205 11:52:55.717475   11537 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:52:55.723677   11537 out.go:177] * Restarting existing qemu2 VM for "embed-certs-541000" ...
	I1205 11:52:55.727817   11537 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:52:55.727866   11537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:37:7b:1a:ff:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:52:55.730205   11537 main.go:141] libmachine: STDOUT: 
	I1205 11:52:55.730227   11537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:52:55.730260   11537 fix.go:56] duration metric: took 12.9305ms for fixHost
	I1205 11:52:55.730266   11537 start.go:83] releasing machines lock for "embed-certs-541000", held for 12.947125ms
	W1205 11:52:55.730272   11537 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:52:55.730309   11537 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:52:55.730314   11537 start.go:729] Will try again in 5 seconds ...
	I1205 11:53:00.732196   11537 start.go:360] acquireMachinesLock for embed-certs-541000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:00.732667   11537 start.go:364] duration metric: took 390.042µs to acquireMachinesLock for "embed-certs-541000"
	I1205 11:53:00.732802   11537 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:53:00.732820   11537 fix.go:54] fixHost starting: 
	I1205 11:53:00.733487   11537 fix.go:112] recreateIfNeeded on embed-certs-541000: state=Stopped err=<nil>
	W1205 11:53:00.733515   11537 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:53:00.737959   11537 out.go:177] * Restarting existing qemu2 VM for "embed-certs-541000" ...
	I1205 11:53:00.742004   11537 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:00.742186   11537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:37:7b:1a:ff:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/embed-certs-541000/disk.qcow2
	I1205 11:53:00.752256   11537 main.go:141] libmachine: STDOUT: 
	I1205 11:53:00.752318   11537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:00.752400   11537 fix.go:56] duration metric: took 19.582083ms for fixHost
	I1205 11:53:00.752417   11537 start.go:83] releasing machines lock for "embed-certs-541000", held for 19.728125ms
	W1205 11:53:00.752633   11537 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-541000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-541000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:00.760002   11537 out.go:201] 
	W1205 11:53:00.764004   11537 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:00.764027   11537 out.go:270] * 
	* 
	W1205 11:53:00.766338   11537 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:53:00.773934   11537 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-541000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (70.696709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-541000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (35.287667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-541000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-541000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-541000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.128208ms)

** stderr ** 
	error: context "embed-certs-541000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-541000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (32.815708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
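
The repeated `context "embed-certs-541000" does not exist` assertions above are kubeconfig lookups: the failed first start never provisioned a VM, so no context for the profile was ever written. A minimal Go sketch of that lookup path, assuming k8s.io/client-go and reusing the profile name from this log:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve a named context the way kubectl does; with no entry in
		// ~/.kube/config this fails with `context "..." does not exist`.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-541000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			fmt.Println("client config:", err)
		}
	}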

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-541000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (32.761708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
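
The `(-want +got)` diff above follows the github.com/google/go-cmp convention: `-` entries exist only in the expected list. Since the host never started, `image list --format=json` reports nothing and every expected v1.31.2 image appears as missing. A small sketch of that comparison, with a two-image subset standing in for the full expected list:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // a stopped host reports no images at all
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}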

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-541000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-541000 --alsologtostderr -v=1: exit status 83 (42.728167ms)

-- stdout --
	* The control-plane node embed-certs-541000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-541000"

-- /stdout --
** stderr ** 
	I1205 11:53:01.062832   11563 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:01.063043   11563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:01.063046   11563 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:01.063049   11563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:01.063191   11563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:01.063429   11563 out.go:352] Setting JSON to false
	I1205 11:53:01.063437   11563 mustload.go:65] Loading cluster: embed-certs-541000
	I1205 11:53:01.063687   11563 config.go:182] Loaded profile config "embed-certs-541000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:01.066678   11563 out.go:177] * The control-plane node embed-certs-541000 host is not running: state=Stopped
	I1205 11:53:01.070687   11563 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-541000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-541000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (33.444ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (33.087375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
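
Each post-mortem block above runs `status --format={{.Host}}` and deliberately tolerates the non-zero exit, keying off the printed state ("Stopped") instead. A rough stand-alone equivalent of that probe, using only os/exec, with the binary path and profile name taken from this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "embed-certs-541000")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// e.g. exit status 7 above: logged as "(may be ok)" rather than fatal.
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
		fmt.Printf("host state: %s\n", out)
	}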

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.869713333s)

-- stdout --
	* [default-k8s-diff-port-675000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-675000" primary control-plane node in "default-k8s-diff-port-675000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-675000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:53:01.516550   11587 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:01.516720   11587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:01.516723   11587 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:01.516726   11587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:01.516856   11587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:01.518058   11587 out.go:352] Setting JSON to false
	I1205 11:53:01.535604   11587 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6750,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:53:01.535684   11587 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:53:01.540754   11587 out.go:177] * [default-k8s-diff-port-675000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:53:01.546702   11587 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:53:01.546761   11587 notify.go:220] Checking for updates...
	I1205 11:53:01.553588   11587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:53:01.556608   11587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:53:01.559641   11587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:53:01.561198   11587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:53:01.564630   11587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:53:01.568063   11587 config.go:182] Loaded profile config "cert-expiration-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:01.568134   11587 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:01.568191   11587 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:53:01.572485   11587 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:53:01.579626   11587 start.go:297] selected driver: qemu2
	I1205 11:53:01.579633   11587 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:53:01.579639   11587 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:53:01.582168   11587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:53:01.585738   11587 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:53:01.588685   11587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:53:01.588702   11587 cni.go:84] Creating CNI manager for ""
	I1205 11:53:01.588723   11587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:53:01.588727   11587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:53:01.588769   11587 start.go:340] cluster config:
	{Name:default-k8s-diff-port-675000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:01.593494   11587 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:53:01.599623   11587 out.go:177] * Starting "default-k8s-diff-port-675000" primary control-plane node in "default-k8s-diff-port-675000" cluster
	I1205 11:53:01.603625   11587 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:53:01.603650   11587 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:53:01.603657   11587 cache.go:56] Caching tarball of preloaded images
	I1205 11:53:01.603739   11587 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:53:01.603745   11587 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:53:01.603801   11587 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/default-k8s-diff-port-675000/config.json ...
	I1205 11:53:01.603814   11587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/default-k8s-diff-port-675000/config.json: {Name:mkdf998d8fb11d8e9d4c466037a58ab3f0b33cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:53:01.604058   11587 start.go:360] acquireMachinesLock for default-k8s-diff-port-675000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:01.604106   11587 start.go:364] duration metric: took 39.833µs to acquireMachinesLock for "default-k8s-diff-port-675000"
	I1205 11:53:01.604117   11587 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:53:01.604145   11587 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:53:01.611641   11587 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:53:01.628893   11587 start.go:159] libmachine.API.Create for "default-k8s-diff-port-675000" (driver="qemu2")
	I1205 11:53:01.628918   11587 client.go:168] LocalClient.Create starting
	I1205 11:53:01.628989   11587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:53:01.629025   11587 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:01.629038   11587 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:01.629074   11587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:53:01.629103   11587 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:01.629109   11587 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:01.629574   11587 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:53:01.785343   11587 main.go:141] libmachine: Creating SSH key...
	I1205 11:53:01.840121   11587 main.go:141] libmachine: Creating Disk image...
	I1205 11:53:01.840127   11587 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:53:01.840328   11587 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:01.850355   11587 main.go:141] libmachine: STDOUT: 
	I1205 11:53:01.850387   11587 main.go:141] libmachine: STDERR: 
	I1205 11:53:01.850443   11587 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2 +20000M
	I1205 11:53:01.858965   11587 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:53:01.858979   11587 main.go:141] libmachine: STDERR: 
	I1205 11:53:01.858996   11587 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:01.859005   11587 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:53:01.859017   11587 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:01.859058   11587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e8:43:43:2e:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:01.860881   11587 main.go:141] libmachine: STDOUT: 
	I1205 11:53:01.860894   11587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:01.860913   11587 client.go:171] duration metric: took 232ms to LocalClient.Create
	I1205 11:53:03.862986   11587 start.go:128] duration metric: took 2.258916458s to createHost
	I1205 11:53:03.863039   11587 start.go:83] releasing machines lock for "default-k8s-diff-port-675000", held for 2.259015625s
	W1205 11:53:03.863082   11587 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:03.876902   11587 out.go:177] * Deleting "default-k8s-diff-port-675000" in qemu2 ...
	W1205 11:53:03.903598   11587 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:03.903630   11587 start.go:729] Will try again in 5 seconds ...
	I1205 11:53:08.905695   11587 start.go:360] acquireMachinesLock for default-k8s-diff-port-675000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:08.906282   11587 start.go:364] duration metric: took 478.625µs to acquireMachinesLock for "default-k8s-diff-port-675000"
	I1205 11:53:08.906425   11587 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:53:08.906678   11587 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:53:08.915236   11587 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:53:08.965549   11587 start.go:159] libmachine.API.Create for "default-k8s-diff-port-675000" (driver="qemu2")
	I1205 11:53:08.965604   11587 client.go:168] LocalClient.Create starting
	I1205 11:53:08.965761   11587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:53:08.965847   11587 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:08.965874   11587 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:08.965943   11587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:53:08.966001   11587 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:08.966018   11587 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:08.966661   11587 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:53:09.139678   11587 main.go:141] libmachine: Creating SSH key...
	I1205 11:53:09.288975   11587 main.go:141] libmachine: Creating Disk image...
	I1205 11:53:09.288982   11587 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:53:09.289192   11587 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:09.299472   11587 main.go:141] libmachine: STDOUT: 
	I1205 11:53:09.299493   11587 main.go:141] libmachine: STDERR: 
	I1205 11:53:09.299560   11587 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2 +20000M
	I1205 11:53:09.308190   11587 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:53:09.308210   11587 main.go:141] libmachine: STDERR: 
	I1205 11:53:09.308220   11587 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:09.308229   11587 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:53:09.308239   11587 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:09.308266   11587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9f:4f:71:42:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:09.310059   11587 main.go:141] libmachine: STDOUT: 
	I1205 11:53:09.310073   11587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:09.310086   11587 client.go:171] duration metric: took 344.486167ms to LocalClient.Create
	I1205 11:53:11.312212   11587 start.go:128] duration metric: took 2.405578666s to createHost
	I1205 11:53:11.312259   11587 start.go:83] releasing machines lock for "default-k8s-diff-port-675000", held for 2.406023208s
	W1205 11:53:11.312553   11587 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:11.321352   11587 out.go:201] 
	W1205 11:53:11.327490   11587 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:11.327545   11587 out.go:270] * 
	* 
	W1205 11:53:11.330173   11587 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:53:11.339386   11587 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (70.532291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)
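
Every start in this run dies the same way: the socket_vmnet daemon is not listening, so `/opt/socket_vmnet/bin/socket_vmnet_client` cannot hand qemu a connected network descriptor (the `-netdev socket,id=net0,fd=3` in the command lines above). A hedged Go sketch of that hand-off, with illustrative qemu args only:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Dial the daemon first; on this agent it fails with exactly the
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
		// seen throughout the report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("socket_vmnet not reachable: %v", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		log.Fatal(cmd.Run())
	}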

TestStartStop/group/newest-cni/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.713728917s)

-- stdout --
	* [newest-cni-535000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-535000" primary control-plane node in "newest-cni-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:53:04.217778   11603 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:04.217953   11603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:04.217957   11603 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:04.217959   11603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:04.218089   11603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:04.219231   11603 out.go:352] Setting JSON to false
	I1205 11:53:04.236817   11603 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6753,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:53:04.236891   11603 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:53:04.243499   11603 out.go:177] * [newest-cni-535000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:53:04.250445   11603 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:53:04.250496   11603 notify.go:220] Checking for updates...
	I1205 11:53:04.256467   11603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:53:04.259461   11603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:53:04.262520   11603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:53:04.265445   11603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:53:04.268473   11603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:53:04.271852   11603 config.go:182] Loaded profile config "default-k8s-diff-port-675000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:04.271925   11603 config.go:182] Loaded profile config "multinode-681000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:04.271977   11603 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:53:04.275400   11603 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:53:04.282476   11603 start.go:297] selected driver: qemu2
	I1205 11:53:04.282483   11603 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:53:04.282494   11603 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:53:04.284998   11603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1205 11:53:04.285039   11603 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 11:53:04.292452   11603 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:53:04.295503   11603 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 11:53:04.295519   11603 cni.go:84] Creating CNI manager for ""
	I1205 11:53:04.295548   11603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:53:04.295552   11603 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:53:04.295587   11603 start.go:340] cluster config:
	{Name:newest-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:04.300267   11603 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:53:04.307440   11603 out.go:177] * Starting "newest-cni-535000" primary control-plane node in "newest-cni-535000" cluster
	I1205 11:53:04.311454   11603 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:53:04.311468   11603 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:53:04.311476   11603 cache.go:56] Caching tarball of preloaded images
	I1205 11:53:04.311560   11603 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:53:04.311565   11603 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:53:04.311618   11603 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/newest-cni-535000/config.json ...
	I1205 11:53:04.311630   11603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/newest-cni-535000/config.json: {Name:mk4f661194ce019c89940614924dbbfea52db1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:53:04.311997   11603 start.go:360] acquireMachinesLock for newest-cni-535000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:04.312050   11603 start.go:364] duration metric: took 46.167µs to acquireMachinesLock for "newest-cni-535000"
	I1205 11:53:04.312062   11603 start.go:93] Provisioning new machine with config: &{Name:newest-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:53:04.312096   11603 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:53:04.320474   11603 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:53:04.339467   11603 start.go:159] libmachine.API.Create for "newest-cni-535000" (driver="qemu2")
	I1205 11:53:04.339508   11603 client.go:168] LocalClient.Create starting
	I1205 11:53:04.339590   11603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:53:04.339628   11603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:04.339639   11603 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:04.339679   11603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:53:04.339713   11603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:04.339723   11603 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:04.340155   11603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:53:04.495651   11603 main.go:141] libmachine: Creating SSH key...
	I1205 11:53:04.532977   11603 main.go:141] libmachine: Creating Disk image...
	I1205 11:53:04.532982   11603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:53:04.533191   11603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:04.543240   11603 main.go:141] libmachine: STDOUT: 
	I1205 11:53:04.543267   11603 main.go:141] libmachine: STDERR: 
	I1205 11:53:04.543328   11603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2 +20000M
	I1205 11:53:04.551953   11603 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:53:04.551969   11603 main.go:141] libmachine: STDERR: 
	I1205 11:53:04.551993   11603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:04.551999   11603 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:53:04.552009   11603 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:04.552040   11603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a3:f4:eb:6c:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:04.553843   11603 main.go:141] libmachine: STDOUT: 
	I1205 11:53:04.553857   11603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:04.553881   11603 client.go:171] duration metric: took 214.375584ms to LocalClient.Create
	I1205 11:53:06.555970   11603 start.go:128] duration metric: took 2.24393925s to createHost
	I1205 11:53:06.556028   11603 start.go:83] releasing machines lock for "newest-cni-535000", held for 2.244042584s
	W1205 11:53:06.556123   11603 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:06.568832   11603 out.go:177] * Deleting "newest-cni-535000" in qemu2 ...
	W1205 11:53:06.594384   11603 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:06.594412   11603 start.go:729] Will try again in 5 seconds ...
	I1205 11:53:11.594397   11603 start.go:360] acquireMachinesLock for newest-cni-535000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:11.594487   11603 start.go:364] duration metric: took 69.916µs to acquireMachinesLock for "newest-cni-535000"
	I1205 11:53:11.594518   11603 start.go:93] Provisioning new machine with config: &{Name:newest-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:53:11.594564   11603 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:53:11.600947   11603 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:53:11.616792   11603 start.go:159] libmachine.API.Create for "newest-cni-535000" (driver="qemu2")
	I1205 11:53:11.616819   11603 client.go:168] LocalClient.Create starting
	I1205 11:53:11.616877   11603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/ca.pem
	I1205 11:53:11.616906   11603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:11.616916   11603 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:11.616953   11603 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20053-7409/.minikube/certs/cert.pem
	I1205 11:53:11.616970   11603 main.go:141] libmachine: Decoding PEM data...
	I1205 11:53:11.616979   11603 main.go:141] libmachine: Parsing certificate...
	I1205 11:53:11.617314   11603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:53:11.787479   11603 main.go:141] libmachine: Creating SSH key...
	I1205 11:53:11.831144   11603 main.go:141] libmachine: Creating Disk image...
	I1205 11:53:11.831150   11603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:53:11.831351   11603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:11.841385   11603 main.go:141] libmachine: STDOUT: 
	I1205 11:53:11.841403   11603 main.go:141] libmachine: STDERR: 
	I1205 11:53:11.841465   11603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2 +20000M
	I1205 11:53:11.850025   11603 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:53:11.850042   11603 main.go:141] libmachine: STDERR: 
	I1205 11:53:11.850053   11603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:11.850058   11603 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:53:11.850069   11603 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:11.850109   11603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f0:d6:13:4b:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:11.851955   11603 main.go:141] libmachine: STDOUT: 
	I1205 11:53:11.851981   11603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:11.851995   11603 client.go:171] duration metric: took 235.17925ms to LocalClient.Create
	I1205 11:53:13.854190   11603 start.go:128] duration metric: took 2.259670375s to createHost
	I1205 11:53:13.854235   11603 start.go:83] releasing machines lock for "newest-cni-535000", held for 2.259798667s
	W1205 11:53:13.854615   11603 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:13.866372   11603 out.go:201] 
	W1205 11:53:13.870514   11603 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:13.870568   11603 out.go:270] * 
	* 
	W1205 11:53:13.873114   11603 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:53:13.884315   11603 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (69.724417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.79s)
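
Note: every failure in this block has the same immediate cause: the qemu2 driver's helper cannot reach the socket_vmnet unix socket, so no VM ever boots. A minimal standalone Go probe (not part of the test suite; the socket path is the one reported in the log) can confirm whether the daemon is listening before the suite is re-run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix socket the qemu2 driver hands to socket_vmnet_client.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the "Connection refused" seen throughout this run:
			// the daemon is not running or the socket is stale.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon on the build agent (for a Homebrew install, likely `sudo brew services restart socket_vmnet`) is the usual remedy; the remaining failures in this report are downstream of this one error.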

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-675000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-675000 create -f testdata/busybox.yaml: exit status 1 (29.294834ms)

** stderr ** 
	error: context "default-k8s-diff-port-675000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-675000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.497541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.064959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
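
Note: the context "default-k8s-diff-port-675000" does not exist errors are a downstream effect of the failed start: minikube never wrote the profile's context into the kubeconfig. `kubectl config get-contexts` lists what is actually there; the same check in client-go form (a sketch, using the KUBECONFIG path reported by this run):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as reported in the run's environment output.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/20053-7409/kubeconfig")
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // the failed profile's context will be absent
		}
	}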

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-675000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-675000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-675000 describe deploy/metrics-server -n kube-system: exit status 1 (28.4245ms)

** stderr ** 
	error: context "default-k8s-diff-port-675000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-675000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (39.20175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.189169417s)

-- stdout --
	* [default-k8s-diff-port-675000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-675000" primary control-plane node in "default-k8s-diff-port-675000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-675000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-675000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:53:15.175180   11672 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:15.175338   11672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:15.175342   11672 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:15.175344   11672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:15.175492   11672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:15.176524   11672 out.go:352] Setting JSON to false
	I1205 11:53:15.194138   11672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6764,"bootTime":1733421631,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:53:15.194210   11672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:53:15.199293   11672 out.go:177] * [default-k8s-diff-port-675000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:53:15.206231   11672 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:53:15.206323   11672 notify.go:220] Checking for updates...
	I1205 11:53:15.210667   11672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:53:15.213185   11672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:53:15.216178   11672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:53:15.219234   11672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:53:15.222267   11672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:53:15.225534   11672 config.go:182] Loaded profile config "default-k8s-diff-port-675000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:15.225805   11672 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:53:15.230226   11672 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:53:15.237200   11672 start.go:297] selected driver: qemu2
	I1205 11:53:15.237207   11672 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:15.237271   11672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:53:15.239741   11672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:53:15.239767   11672 cni.go:84] Creating CNI manager for ""
	I1205 11:53:15.239795   11672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:53:15.239820   11672 start.go:340] cluster config:
	{Name:default-k8s-diff-port-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-675000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:15.244413   11672 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:53:15.251159   11672 out.go:177] * Starting "default-k8s-diff-port-675000" primary control-plane node in "default-k8s-diff-port-675000" cluster
	I1205 11:53:15.255210   11672 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:53:15.255238   11672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:53:15.255246   11672 cache.go:56] Caching tarball of preloaded images
	I1205 11:53:15.255325   11672 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:53:15.255330   11672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:53:15.255381   11672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/default-k8s-diff-port-675000/config.json ...
	I1205 11:53:15.255798   11672 start.go:360] acquireMachinesLock for default-k8s-diff-port-675000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:15.255834   11672 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "default-k8s-diff-port-675000"
	I1205 11:53:15.255842   11672 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:53:15.255849   11672 fix.go:54] fixHost starting: 
	I1205 11:53:15.255964   11672 fix.go:112] recreateIfNeeded on default-k8s-diff-port-675000: state=Stopped err=<nil>
	W1205 11:53:15.255970   11672 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:53:15.259241   11672 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-675000" ...
	I1205 11:53:15.267202   11672 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:15.267254   11672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9f:4f:71:42:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:15.269505   11672 main.go:141] libmachine: STDOUT: 
	I1205 11:53:15.269523   11672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:15.269554   11672 fix.go:56] duration metric: took 13.705417ms for fixHost
	I1205 11:53:15.269558   11672 start.go:83] releasing machines lock for "default-k8s-diff-port-675000", held for 13.719792ms
	W1205 11:53:15.269563   11672 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:15.269606   11672 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:15.269610   11672 start.go:729] Will try again in 5 seconds ...
	I1205 11:53:20.271814   11672 start.go:360] acquireMachinesLock for default-k8s-diff-port-675000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:20.272337   11672 start.go:364] duration metric: took 391.75µs to acquireMachinesLock for "default-k8s-diff-port-675000"
	I1205 11:53:20.272480   11672 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:53:20.272499   11672 fix.go:54] fixHost starting: 
	I1205 11:53:20.273236   11672 fix.go:112] recreateIfNeeded on default-k8s-diff-port-675000: state=Stopped err=<nil>
	W1205 11:53:20.273264   11672 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:53:20.278011   11672 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-675000" ...
	I1205 11:53:20.285906   11672 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:20.286112   11672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9f:4f:71:42:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/default-k8s-diff-port-675000/disk.qcow2
	I1205 11:53:20.296720   11672 main.go:141] libmachine: STDOUT: 
	I1205 11:53:20.296766   11672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:20.296834   11672 fix.go:56] duration metric: took 24.334875ms for fixHost
	I1205 11:53:20.296852   11672 start.go:83] releasing machines lock for "default-k8s-diff-port-675000", held for 24.4925ms
	W1205 11:53:20.297005   11672 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-675000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-675000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:20.304882   11672 out.go:201] 
	W1205 11:53:20.309002   11672 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:20.309027   11672 out.go:270] * 
	* 
	W1205 11:53:20.311509   11672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:53:20.318922   11672 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-675000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (70.689542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
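
Note: the SecondStart logs show minikube's fix-host retry policy: one "! StartHost failed, but will try again" warning, a fixed 5-second sleep, one more attempt, then exit with GUEST_PROVISION. Reduced to a standalone sketch (startHost is a stub standing in for the driver start call, not minikube's actual function):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost always fails the same way this run does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return
			}
		}
		fmt.Println("host started")
	}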

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.190262459s)

-- stdout --
	* [newest-cni-535000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-535000" primary control-plane node in "newest-cni-535000" cluster
	* Restarting existing qemu2 VM for "newest-cni-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:53:17.276090   11693 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:17.276254   11693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:17.276257   11693 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:17.276260   11693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:17.276397   11693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:17.277454   11693 out.go:352] Setting JSON to false
	I1205 11:53:17.295141   11693 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6766,"bootTime":1733421631,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:53:17.295208   11693 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:53:17.299636   11693 out.go:177] * [newest-cni-535000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:53:17.307574   11693 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:53:17.307650   11693 notify.go:220] Checking for updates...
	I1205 11:53:17.313049   11693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:53:17.315597   11693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:53:17.318600   11693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:53:17.321620   11693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:53:17.324653   11693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:53:17.327915   11693 config.go:182] Loaded profile config "newest-cni-535000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:17.328200   11693 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:53:17.332609   11693 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:53:17.339611   11693 start.go:297] selected driver: qemu2
	I1205 11:53:17.339617   11693 start.go:901] validating driver "qemu2" against &{Name:newest-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:17.339670   11693 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:53:17.342073   11693 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 11:53:17.342094   11693 cni.go:84] Creating CNI manager for ""
	I1205 11:53:17.342113   11693 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:53:17.342134   11693 start.go:340] cluster config:
	{Name:newest-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:53:17.346573   11693 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:53:17.353578   11693 out.go:177] * Starting "newest-cni-535000" primary control-plane node in "newest-cni-535000" cluster
	I1205 11:53:17.357634   11693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:53:17.357653   11693 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:53:17.357665   11693 cache.go:56] Caching tarball of preloaded images
	I1205 11:53:17.357731   11693 preload.go:172] Found /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:53:17.357737   11693 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:53:17.357805   11693 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/newest-cni-535000/config.json ...
	I1205 11:53:17.358220   11693 start.go:360] acquireMachinesLock for newest-cni-535000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:17.358252   11693 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "newest-cni-535000"
	I1205 11:53:17.358261   11693 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:53:17.358266   11693 fix.go:54] fixHost starting: 
	I1205 11:53:17.358386   11693 fix.go:112] recreateIfNeeded on newest-cni-535000: state=Stopped err=<nil>
	W1205 11:53:17.358394   11693 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:53:17.361602   11693 out.go:177] * Restarting existing qemu2 VM for "newest-cni-535000" ...
	I1205 11:53:17.369588   11693 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:17.369635   11693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f0:d6:13:4b:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:17.371966   11693 main.go:141] libmachine: STDOUT: 
	I1205 11:53:17.371987   11693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:17.372020   11693 fix.go:56] duration metric: took 13.752542ms for fixHost
	I1205 11:53:17.372024   11693 start.go:83] releasing machines lock for "newest-cni-535000", held for 13.767917ms
	W1205 11:53:17.372030   11693 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:17.372068   11693 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:17.372073   11693 start.go:729] Will try again in 5 seconds ...
	I1205 11:53:22.374150   11693 start.go:360] acquireMachinesLock for newest-cni-535000: {Name:mk7c70506bd102fb9c1f9f86c99ecc58cfd761ea Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:53:22.374770   11693 start.go:364] duration metric: took 516µs to acquireMachinesLock for "newest-cni-535000"
	I1205 11:53:22.374902   11693 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:53:22.374923   11693 fix.go:54] fixHost starting: 
	I1205 11:53:22.375793   11693 fix.go:112] recreateIfNeeded on newest-cni-535000: state=Stopped err=<nil>
	W1205 11:53:22.375819   11693 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:53:22.380338   11693 out.go:177] * Restarting existing qemu2 VM for "newest-cni-535000" ...
	I1205 11:53:22.387311   11693 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:53:22.387582   11693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f0:d6:13:4b:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20053-7409/.minikube/machines/newest-cni-535000/disk.qcow2
	I1205 11:53:22.398850   11693 main.go:141] libmachine: STDOUT: 
	I1205 11:53:22.398912   11693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:53:22.399014   11693 fix.go:56] duration metric: took 24.094084ms for fixHost
	I1205 11:53:22.399031   11693 start.go:83] releasing machines lock for "newest-cni-535000", held for 24.23875ms
	W1205 11:53:22.399228   11693 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-535000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-535000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:53:22.407272   11693 out.go:201] 
	W1205 11:53:22.410471   11693 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:53:22.410505   11693 out.go:270] * 
	* 
	W1205 11:53:22.412937   11693 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:53:22.425289   11693 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-535000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (72.334291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
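
Note: the post-mortem helper's --format={{.Host}} argument is a Go text/template rendered against minikube's status structure, which is why the command prints the bare word "Stopped". A self-contained illustration (the Status type here is pared down for the sketch; minikube's real struct has more fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Status models only the Host field the post-mortem helper reads.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the -- stdout -- blocks above.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
			panic(err)
		}
	}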

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-675000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (35.260792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-675000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-675000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-675000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.4355ms)

** stderr ** 
	error: context "default-k8s-diff-port-675000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-675000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.236625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-675000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.1705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
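
Note: the "(-want +got)" diff above follows the go-cmp convention: every expected image carries a leading "-" because image list returned nothing from the stopped VM. A minimal reproduction of that diff shape (image list truncated for brevity; assumes the github.com/google/go-cmp module is available):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the VM never started, so no images were listed
		// cmp.Diff marks want-only entries with "-" and got-only entries with "+".
		fmt.Println(cmp.Diff(want, got))
	}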

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-675000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-675000 --alsologtostderr -v=1: exit status 83 (45.562375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-675000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-675000"

-- /stdout --
** stderr ** 
	I1205 11:53:20.611486   11712 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:20.611681   11712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:20.611684   11712 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:20.611686   11712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:20.611808   11712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:20.612038   11712 out.go:352] Setting JSON to false
	I1205 11:53:20.612045   11712 mustload.go:65] Loading cluster: default-k8s-diff-port-675000
	I1205 11:53:20.612271   11712 config.go:182] Loaded profile config "default-k8s-diff-port-675000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:20.616726   11712 out.go:177] * The control-plane node default-k8s-diff-port-675000 host is not running: state=Stopped
	I1205 11:53:20.620742   11712 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-675000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-675000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.1985ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (33.074958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
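
Note: pause exits 83 after printing the "host is not running" guidance, and the post-mortem helper then probes host state with `status --format={{.Host}}`, treating exit status 7 plus state "Stopped" as expected. A sketch of that probe (binary and profile names copied from the log; this is not the helper's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-675000").Output()
		// err is non-nil here (exit status 7), which "may be ok":
		// status exits non-zero whenever the host is not running.
		state := strings.TrimSpace(string(out))
		if state == "Stopped" {
			fmt.Println(`host is not running, skipping log retrieval (state="Stopped")`)
			return
		}
		fmt.Println("host state:", state, "err:", err)
	}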

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-535000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (33.800875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-535000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-535000 --alsologtostderr -v=1: exit status 83 (45.705792ms)

-- stdout --
	* The control-plane node newest-cni-535000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-535000"

-- /stdout --
** stderr ** 
	I1205 11:53:22.619023   11736 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:53:22.619204   11736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:22.619207   11736 out.go:358] Setting ErrFile to fd 2...
	I1205 11:53:22.619210   11736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:53:22.619345   11736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:53:22.619580   11736 out.go:352] Setting JSON to false
	I1205 11:53:22.619587   11736 mustload.go:65] Loading cluster: newest-cni-535000
	I1205 11:53:22.619793   11736 config.go:182] Loaded profile config "newest-cni-535000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:53:22.624335   11736 out.go:177] * The control-plane node newest-cni-535000 host is not running: state=Stopped
	I1205 11:53:22.628309   11736 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-535000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-535000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (34.166417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-535000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (33.597083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 8.39
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.96
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 10.84
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.99
55 TestFunctional/serial/CacheCmd/cache/add_local 1.06
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.24
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.29
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
107 TestFunctional/parallel/ProfileCmd/profile_list 0.09
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.93
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.61
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
238 TestStoppedBinaryUpgrade/Setup 1.08
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
256 TestNoKubernetes/serial/ProfileList 0.13
258 TestNoKubernetes/serial/Stop 3.33
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 3.49
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 1.87
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/embed-certs/serial/Stop 3.26
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.37
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 3.08
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 11:27:35.867428    7922 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1205 11:27:35.867826    7922 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
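
Note: preload-exists passes purely by finding the cached tarball on disk (preload.go:146 above). A sketch of that existence check, with the path copied from the log; the real lookup in preload.go also matches the Kubernetes version and container runtime:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		p := "/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/" +
			"preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4"
		if _, err := os.Stat(p); err == nil {
			fmt.Println("Found local preload:", p)
		} else {
			fmt.Println("no local preload:", err)
		}
	}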

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-019000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-019000: exit status 85 (101.520833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |          |
	|         | -p download-only-019000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 11:27:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 11:27:20.110859    7923 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:27:20.111031    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:20.111034    7923 out.go:358] Setting ErrFile to fd 2...
	I1205 11:27:20.111037    7923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:20.111165    7923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	W1205 11:27:20.111262    7923 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20053-7409/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20053-7409/.minikube/config/config.json: no such file or directory
	I1205 11:27:20.112844    7923 out.go:352] Setting JSON to true
	I1205 11:27:20.131258    7923 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5209,"bootTime":1733421631,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:27:20.131340    7923 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:27:20.137011    7923 out.go:97] [download-only-019000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:27:20.137145    7923 notify.go:220] Checking for updates...
	W1205 11:27:20.137207    7923 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 11:27:20.140131    7923 out.go:169] MINIKUBE_LOCATION=20053
	I1205 11:27:20.143167    7923 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:27:20.147985    7923 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:27:20.151102    7923 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:27:20.154151    7923 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	W1205 11:27:20.160147    7923 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 11:27:20.160454    7923 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:27:20.163069    7923 out.go:97] Using the qemu2 driver based on user configuration
	I1205 11:27:20.163088    7923 start.go:297] selected driver: qemu2
	I1205 11:27:20.163101    7923 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:27:20.163190    7923 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:27:20.166088    7923 out.go:169] Automatically selected the socket_vmnet network
	I1205 11:27:20.171590    7923 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 11:27:20.171687    7923 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:27:20.171736    7923 cni.go:84] Creating CNI manager for ""
	I1205 11:27:20.171774    7923 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:27:20.171836    7923 start.go:340] cluster config:
	{Name:download-only-019000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-019000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:27:20.176413    7923 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:27:20.179124    7923 out.go:97] Downloading VM boot image ...
	I1205 11:27:20.179139    7923 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1205 11:27:27.606620    7923 out.go:97] Starting "download-only-019000" primary control-plane node in "download-only-019000" cluster
	I1205 11:27:27.606645    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:27.667782    7923 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:27:27.667805    7923 cache.go:56] Caching tarball of preloaded images
	I1205 11:27:27.668047    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:27.673321    7923 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 11:27:27.673328    7923 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:27.755775    7923 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:27:34.575513    7923 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:34.575694    7923 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:35.270172    7923 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:27:35.270361    7923 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/download-only-019000/config.json ...
	I1205 11:27:35.270378    7923 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20053-7409/.minikube/profiles/download-only-019000/config.json: {Name:mkb66e6542a11c8b8c37524c92ae54d6c9226a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:27:35.270660    7923 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:27:35.270914    7923 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1205 11:27:35.818909    7923 out.go:193] 
	W1205 11:27:35.822952    7923 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20053-7409/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320 0x109a30320] Decompressors:map[bz2:0x14000803610 gz:0x14000803618 tar:0x14000803570 tar.bz2:0x14000803580 tar.gz:0x14000803590 tar.xz:0x140008035a0 tar.zst:0x140008035f0 tbz2:0x14000803580 tgz:0x14000803590 txz:0x140008035a0 tzst:0x140008035f0 xz:0x14000803630 zip:0x14000803660 zst:0x14000803638] Getters:map[file:0x140017a8560 http:0x14000864190 https:0x140008641e0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1205 11:27:35.822974    7923 out_reason.go:110] 
	W1205 11:27:35.829910    7923 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:27:35.833885    7923 out.go:193] 
	
	
	* The control-plane node download-only-019000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-019000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
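
Note: the kubectl cache failure quoted above stems from the `?checksum=file:<url>.sha256` convention: the downloader fetches a .sha256 sidecar and verifies the payload against it, and the sidecar request for the v1.20.0 darwin/arm64 kubectl returns 404, so the whole download is rejected as "invalid checksum". A self-contained sketch of that verification scheme (not minikube's go-getter wiring; it assumes the sidecar holds just the hex digest):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch returns the body at url, failing on any non-200 response,
	// mirroring the "bad response code: 404" in the log.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"

		sidecar, err := fetch(base + ".sha256") // the request that 404s
		if err != nil {
			fmt.Println("invalid checksum: Error downloading checksum file:", err)
			return
		}
		body, err := fetch(base)
		if err != nil {
			fmt.Println("download failed:", err)
			return
		}
		sum := sha256.Sum256(body)
		if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(sidecar)) {
			fmt.Println("invalid checksum")
			return
		}
		fmt.Println("checksum verified")
	}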

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-019000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (8.39s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-727000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-727000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (8.386268167s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.39s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 11:27:44.633255    7922 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 11:27:44.633323    7922 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-727000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-727000: exit status 85 (84.995208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | -p download-only-019000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| delete  | -p download-only-019000        | download-only-019000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST | 05 Dec 24 11:27 PST |
	| start   | -o=json --download-only        | download-only-727000 | jenkins | v1.34.0 | 05 Dec 24 11:27 PST |                     |
	|         | -p download-only-727000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 11:27:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 11:27:36.278551    7950 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:27:36.278705    7950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:36.278709    7950 out.go:358] Setting ErrFile to fd 2...
	I1205 11:27:36.278711    7950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:27:36.278847    7950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:27:36.279975    7950 out.go:352] Setting JSON to true
	I1205 11:27:36.297614    7950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5225,"bootTime":1733421631,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:27:36.297686    7950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:27:36.302796    7950 out.go:97] [download-only-727000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:27:36.302928    7950 notify.go:220] Checking for updates...
	I1205 11:27:36.306751    7950 out.go:169] MINIKUBE_LOCATION=20053
	I1205 11:27:36.309870    7950 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:27:36.313786    7950 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:27:36.316785    7950 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:27:36.319834    7950 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	W1205 11:27:36.325769    7950 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 11:27:36.325979    7950 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:27:36.328780    7950 out.go:97] Using the qemu2 driver based on user configuration
	I1205 11:27:36.328790    7950 start.go:297] selected driver: qemu2
	I1205 11:27:36.328795    7950 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:27:36.328857    7950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:27:36.331756    7950 out.go:169] Automatically selected the socket_vmnet network
	I1205 11:27:36.337161    7950 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 11:27:36.337268    7950 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:27:36.337290    7950 cni.go:84] Creating CNI manager for ""
	I1205 11:27:36.337313    7950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:27:36.337320    7950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:27:36.337364    7950 start.go:340] cluster config:
	{Name:download-only-727000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:27:36.341820    7950 iso.go:125] acquiring lock: {Name:mkf880d2c9d9f685424a46e927591b90c8a9fe85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:27:36.344805    7950 out.go:97] Starting "download-only-727000" primary control-plane node in "download-only-727000" cluster
	I1205 11:27:36.344814    7950 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:27:36.413037    7950 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:27:36.413052    7950 cache.go:56] Caching tarball of preloaded images
	I1205 11:27:36.413988    7950 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:27:36.417270    7950 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1205 11:27:36.417278    7950 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1205 11:27:36.500603    7950 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20053-7409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-727000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-727000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-727000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1205 11:27:45.169877    7922 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-263000 --alsologtostderr --binary-mirror http://127.0.0.1:56275 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-263000
--- PASS: TestBinaryMirror (0.30s)
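
Note: TestBinaryMirror points `minikube start --download-only --binary-mirror http://127.0.0.1:56275` at a throwaway local HTTP server. A sketch of the kind of mirror that flag expects (the ./mirror directory name is hypothetical): a plain file server on a loopback port standing in for dl.k8s.io:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-downloaded release binaries so a
		// "--binary-mirror http://127.0.0.1:56275" start pulls kubectl
		// from localhost instead of dl.k8s.io.
		log.Fatal(http.ListenAndServe("127.0.0.1:56275",
			http.FileServer(http.Dir("./mirror"))))
	}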

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-656000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-656000: exit status 85 (66.253542ms)

-- stdout --
	* Profile "addons-656000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-656000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-656000: exit status 85 (62.412667ms)

-- stdout --
	* Profile "addons-656000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.96s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1205 11:49:37.870120    7922 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:49:37.870281    7922 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (10.96s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status: exit status 7 (35.23825ms)

-- stdout --
	nospam-444000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status: exit status 7 (33.812ms)

-- stdout --
	nospam-444000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status: exit status 7 (34.448875ms)

-- stdout --
	nospam-444000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause: exit status 83 (45.626292ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause: exit status 83 (43.804333ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause: exit status 83 (44.867833ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause: exit status 83 (44.682625ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause: exit status 83 (42.930917ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause: exit status 83 (44.850459ms)

-- stdout --
	* The control-plane node nospam-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-444000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (10.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop: (3.781478834s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop: (3.243762792s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-444000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-444000 stop: (3.816719416s)
--- PASS: TestErrorSpam/stop (10.84s)

TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20053-7409/.minikube/files/etc/test/nested/copy/7922/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.99s)
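
Note: the three "cache add" runs above populate minikube's local image cache so the images can later be loaded into the node without another pull. A minimal sketch of the same workflow (plain "minikube" standing in for the out/minikube-darwin-arm64 binary the harness runs; profile name and tags as in this log):

    # Cache remote images for the profile, then inspect and prune the cache.
    minikube -p functional-234000 cache add registry.k8s.io/pause:3.1
    minikube -p functional-234000 cache add registry.k8s.io/pause:latest
    minikube cache list                                  # show cached images
    minikube cache delete registry.k8s.io/pause:3.1      # drop one entry again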

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4028984700/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache add minikube-local-cache-test:functional-234000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 cache delete minikube-local-cache-test:functional-234000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-234000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
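
The add_local variant builds a throwaway image with docker first and caches it under its local tag. A sketch under the same conventions (the build context directory "." is a placeholder):

    # Build a local image, cache it, then remove both the cache entry and the tag.
    docker build -t minikube-local-cache-test:functional-234000 .
    minikube -p functional-234000 cache add minikube-local-cache-test:functional-234000
    minikube -p functional-234000 cache delete minikube-local-cache-test:functional-234000
    docker rmi minikube-local-cache-test:functional-234000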

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 config get cpus: exit status 14 (35.259208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 config get cpus: exit status 14 (41.120667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
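
The two exit status 14 responses above are the expected behavior: "config get" on a key that is not set fails with "specified key could not be found in config". A sketch of the round trip the test drives:

    # set -> get -> unset -> get; the final get exits 14 because the key is gone.
    minikube -p functional-234000 config set cpus 2
    minikube -p functional-234000 config get cpus        # prints 2
    minikube -p functional-234000 config unset cpus
    minikube -p functional-234000 config get cpus || echo "unset (exit $?)"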

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-234000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.63575ms)

-- stdout --
	* [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1205 11:29:19.406014    8413 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:29:19.406189    8413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:19.406192    8413 out.go:358] Setting ErrFile to fd 2...
	I1205 11:29:19.406195    8413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:19.406530    8413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:29:19.407842    8413 out.go:352] Setting JSON to false
	I1205 11:29:19.425646    8413 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5328,"bootTime":1733421631,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:29:19.425718    8413 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:29:19.430411    8413 out.go:177] * [functional-234000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:29:19.437585    8413 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:29:19.437650    8413 notify.go:220] Checking for updates...
	I1205 11:29:19.444481    8413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:29:19.447568    8413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:29:19.450524    8413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:29:19.453530    8413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:29:19.456567    8413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:29:19.459882    8413 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:29:19.460156    8413 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:29:19.464488    8413 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:29:19.471546    8413 start.go:297] selected driver: qemu2
	I1205 11:29:19.471553    8413 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:29:19.471606    8413 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:29:19.478461    8413 out.go:201] 
	W1205 11:29:19.482568    8413 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 11:29:19.486413    8413 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
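
"--dry-run" validates the requested configuration against the existing profile without touching the VM, so the undersized request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the usable minimum of 1800MB) while the second, unmodified invocation succeeds. Both invocations in sketch form:

    # Invalid: memory below the minimum; exits 23 without starting anything.
    minikube start -p functional-234000 --dry-run --memory 250MB --driver=qemu2
    # Valid: no conflicting flags, so the dry run exits 0.
    minikube start -p functional-234000 --dry-run --driver=qemu2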

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-234000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-234000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.074083ms)

-- stdout --
	* [functional-234000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1205 11:29:19.285556    8409 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:29:19.285705    8409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:19.285708    8409 out.go:358] Setting ErrFile to fd 2...
	I1205 11:29:19.285710    8409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:29:19.285842    8409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20053-7409/.minikube/bin
	I1205 11:29:19.287343    8409 out.go:352] Setting JSON to false
	I1205 11:29:19.305719    8409 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5328,"bootTime":1733421631,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1205 11:29:19.305800    8409 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:29:19.309656    8409 out.go:177] * [functional-234000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1205 11:29:19.316565    8409 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 11:29:19.316623    8409 notify.go:220] Checking for updates...
	I1205 11:29:19.323521    8409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	I1205 11:29:19.326469    8409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:29:19.329526    8409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:29:19.332587    8409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	I1205 11:29:19.335531    8409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:29:19.338884    8409 config.go:182] Loaded profile config "functional-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:29:19.339159    8409 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:29:19.343563    8409 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1205 11:29:19.350524    8409 start.go:297] selected driver: qemu2
	I1205 11:29:19.350532    8409 start.go:901] validating driver "qemu2" against &{Name:functional-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:29:19.350592    8409 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:29:19.356533    8409 out.go:201] 
	W1205 11:29:19.360512    8409 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 11:29:19.364516    8409 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
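
For readers without French: "Utilisation du pilote qemu2 basé sur le profil existant" is "Using the qemu2 driver based on existing profile", and the "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" line is the same 250MiB-below-1800MB error seen in DryRun, just localized. A hedged sketch of reproducing the French output; that minikube picks its message catalog from the standard locale variables is an assumption here, not something this log states:

    # Assumption: minikube reads LC_ALL/LANG to choose the output language.
    LC_ALL=fr_FR.UTF-8 minikube start -p functional-234000 --dry-run --memory 250MB --driver=qemu2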

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 addons list
I1205 11:28:44.341389    7922 retry.go:31] will retry after 4.249931478s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)
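
"addons list" supports both the human-readable table and JSON. A sketch of consuming the JSON form (jq is assumed to be installed, and that the output is a map keyed by addon name is an assumption about its shape):

    minikube -p functional-234000 addons list
    minikube -p functional-234000 addons list -o json | jq -r 'keys[]'   # addon names, assuming a name-keyed map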

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
I1205 11:29:19.904280    7922 retry.go:31] will retry after 15.414876751s: Temporary Error: Get "http:": http: no Host in request URL
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "51.757583ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.819417ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "50.856167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.90125ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
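
The "--light" variant is faster because it skips validating each cluster's status. A sketch of consuming the JSON output (the top-level "valid"/"invalid" keys are an assumption about the schema; jq assumed installed):

    minikube profile list -o json | jq -r '.valid[].Name'    # assumed schema: {"invalid":[...],"valid":[...]}
    minikube profile list -o json --light                    # skip status checks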

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.901754625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-234000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image rm kicbase/echo-server:functional-234000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-234000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 image save --daemon kicbase/echo-server:functional-234000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-234000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014494541s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
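
dscacheutil queries macOS's Directory Services resolver, so this test confirms that the running "minikube tunnel" has made the in-cluster service name resolvable from the host. The manual check is the same one-liner the test runs:

    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.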

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-234000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-234000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-234000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-234000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.61s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-849000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-849000 --output=json --user=testUser: (2.614182125s)
--- PASS: TestJSONOutput/stop/Command (2.61s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-489000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-489000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.087084ms)

-- stdout --
	{"specversion":"1.0","id":"4f18c2c1-7878-4c41-b8c9-d938ade0a655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-489000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fb36e0a-d7b1-43ae-b588-c273a76c2797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"717b4022-7d87-490d-a7ee-0c9dea3f2f92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig"}}
	{"specversion":"1.0","id":"24a994ad-f243-42aa-a94e-01b2fdf37f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"56f65ec5-ae3e-49ce-8973-6f63fcd89459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d2cd659-cfc7-488e-9270-9c364f0c93ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube"}}
	{"specversion":"1.0","id":"7370818b-c294-4338-abd0-d10a06154ce9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f01a1490-f0a4-494a-a63e-5b395f8f347d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-489000
--- PASS: TestErrorJSONOutput (0.21s)
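
With "--output=json", each line is a CloudEvents-style JSON object, so failures are machine-readable: the event whose type is "io.k8s.sigs.minikube.error" carries the exit code, error name, and message. A sketch of extracting that event from the stream (jq assumed installed):

    minikube start -p json-output-error-489000 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'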

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.08s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-050000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-344000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.760958ms)

-- stdout --
	* [NoKubernetes-344000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20053
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20053-7409/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
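
The MK_USAGE failure (exit status 14) is the point of this test: "--no-kubernetes" and "--kubernetes-version" are mutually exclusive. Per the error text above, the accepted form is:

    minikube config unset kubernetes-version       # clear a global default, if one is set
    minikube start -p NoKubernetes-344000 --no-kubernetes --driver=qemu2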

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-344000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-344000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (54.6585ms)

-- stdout --
	* The control-plane node NoKubernetes-344000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-344000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
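
The check leans on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active, so any non-zero exit (here 83, because the host itself is stopped) counts as kubelet not running. The same probe by hand:

    minikube ssh -p NoKubernetes-344000 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet active" || echo "kubelet not running"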

TestNoKubernetes/serial/ProfileList (0.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.13s)

TestNoKubernetes/serial/Stop (3.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-344000
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20053
- KUBECONFIG=/Users/jenkins/minikube-integration/20053-7409/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2341893374/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-344000: (3.326730583s)
--- PASS: TestNoKubernetes/serial/Stop (3.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-344000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-344000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (51.421416ms)

-- stdout --
	* The control-plane node NoKubernetes-344000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-344000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-547000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-547000 --alsologtostderr -v=3: (3.493039s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-547000 -n old-k8s-version-547000: exit status 7 (63.480333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-547000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
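
"status --format" takes a Go template over minikube's status struct, and exit status 7 here corresponds to a stopped host, which the test explicitly tolerates ("may be ok") before re-enabling the addon. The same two steps by hand:

    minikube status -p old-k8s-version-547000 --format='{{.Host}}'    # prints Stopped, exits 7
    minikube addons enable dashboard -p old-k8s-version-547000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4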

TestStartStop/group/no-preload/serial/Stop (1.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-911000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-911000 --alsologtostderr -v=3: (1.86914425s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-911000 -n no-preload-911000: exit status 7 (61.450042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-911000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-541000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-541000 --alsologtostderr -v=3: (3.257058958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-541000 -n embed-certs-541000: exit status 7 (61.534625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-541000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-675000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-675000 --alsologtostderr -v=3: (3.370140292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.37s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-535000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-535000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-535000 --alsologtostderr -v=3: (3.07746925s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-675000 -n default-k8s-diff-port-675000: exit status 7 (57.797792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-675000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-535000 -n newest-cni-535000: exit status 7 (60.709792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-535000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2322962682/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733426925226062000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2322962682/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733426925226062000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2322962682/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733426925226062000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2322962682/001/test-1733426925226062000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.109792ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:45.287670    7922 retry.go:31] will retry after 614.006055ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.404875ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:45.996549    7922 retry.go:31] will retry after 895.117113ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.837209ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:46.984913    7922 retry.go:31] will retry after 567.292668ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.959875ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:47.645579    7922 retry.go:31] will retry after 1.677625569s: exit status 83
I1205 11:28:48.593617    7922 retry.go:31] will retry after 5.013557063s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.760334ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:49.415967    7922 retry.go:31] will retry after 2.194470448s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.574875ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:51.701326    7922 retry.go:31] will retry after 4.967758398s: exit status 83
I1205 11:28:53.610237    7922 retry.go:31] will retry after 3.928988002s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.660959ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo umount -f /mount-9p": exit status 83 (49.904666ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2322962682/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.70s)

TestFunctional/parallel/MountCmd/specific-port (12.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2192300966/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (67.8225ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:57.001358    7922 retry.go:31] will retry after 585.648964ms: exit status 83
I1205 11:28:57.541607    7922 retry.go:31] will retry after 5.320800306s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.920166ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:57.681346    7922 retry.go:31] will retry after 721.566947ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.489042ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:28:58.495811    7922 retry.go:31] will retry after 1.620798703s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.000458ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:00.207914    7922 retry.go:31] will retry after 1.834923452s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.309875ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:02.134599    7922 retry.go:31] will retry after 1.570026251s: exit status 83
I1205 11:29:02.864673    7922 retry.go:31] will retry after 17.037462033s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.172083ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:03.796451    7922 retry.go:31] will retry after 5.366678792s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.669042ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "sudo umount -f /mount-9p": exit status 83 (48.776541ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-234000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2192300966/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (82.413042ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:09.509936    7922 retry.go:31] will retry after 275.767157ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (89.456458ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:09.877466    7922 retry.go:31] will retry after 729.161029ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (90.141833ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:10.699216    7922 retry.go:31] will retry after 1.385129196s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (90.64375ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:12.177293    7922 retry.go:31] will retry after 2.439115304s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (91.798959ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:14.710626    7922 retry.go:31] will retry after 1.464035203s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (93.735958ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
I1205 11:29:16.270925    7922 retry.go:31] will retry after 2.326472121s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-234000 ssh "findmnt -T" /mount1: exit status 83 (90.682958ms)

-- stdout --
	* The control-plane node functional-234000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-234000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-234000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2056582708/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.65s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-907000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-907000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-907000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/hosts:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/resolv.conf:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-907000

>>> host: crictl pods:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: crictl containers:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> k8s: describe netcat deployment:
error: context "cilium-907000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-907000" does not exist

>>> k8s: netcat logs:
error: context "cilium-907000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-907000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-907000" does not exist

>>> k8s: coredns logs:
error: context "cilium-907000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-907000" does not exist

>>> k8s: api server logs:
error: context "cilium-907000" does not exist

>>> host: /etc/cni:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: ip a s:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: ip r s:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: iptables-save:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: iptables table nat:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-907000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-907000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-907000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-907000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-907000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-907000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-907000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-907000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-907000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-907000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-907000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: kubelet daemon config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> k8s: kubelet logs:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-907000

>>> host: docker daemon status:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: docker daemon config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: docker system info:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: cri-docker daemon status:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: cri-docker daemon config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: cri-dockerd version:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: containerd daemon status:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: containerd daemon config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: containerd config dump:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: crio daemon status:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: crio daemon config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: /etc/crio:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

>>> host: crio config:
* Profile "cilium-907000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-907000"

----------------------- debugLogs end: cilium-907000 [took: 2.351057042s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-907000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-520000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
