Test Report: QEMU_macOS 20107

8d7d309004e1c5aed2c11e9a2f72e102a81e4e45 : 2024-12-16 : 37505

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.32
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.06
27 TestAddons/Setup 10.15
28 TestCertOptions 10.27
29 TestCertExpiration 198.5
30 TestDockerFlags 12.55
31 TestForceSystemdFlag 10.21
32 TestForceSystemdEnv 10.1
38 TestErrorSpam/setup 9.86
47 TestFunctional/serial/StartWithProxy 9.92
49 TestFunctional/serial/SoftStart 5.28
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.3
63 TestFunctional/serial/ExtraConfig 5.28
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 98.19
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.01
142 TestMultiControlPlane/serial/DeployApp 80.08
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.13
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 46.99
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.05
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 2.12
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 10.02
165 TestJSONOutput/start/Command 9.84
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.22
197 TestMountStart/serial/StartWithMountFirst 9.99
200 TestMultiNode/serial/FreshStart2Nodes 10.01
201 TestMultiNode/serial/DeployApp2Nodes 79.23
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.09
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 49.17
209 TestMultiNode/serial/RestartKeepsNodes 9.16
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 3.5
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.45
217 TestPreload 10.24
219 TestScheduledStopUnix 10.08
220 TestSkaffold 12.27
223 TestRunningBinaryUpgrade 629.09
225 TestKubernetesUpgrade 19.2
239 TestStoppedBinaryUpgrade/Upgrade 585.6
249 TestPause/serial/Start 10.06
252 TestNoKubernetes/serial/StartWithK8s 10.19
253 TestNoKubernetes/serial/StartWithStopK8s 7.69
254 TestNoKubernetes/serial/Start 7.47
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.88
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.37
260 TestNoKubernetes/serial/StartNoArgs 5.37
262 TestNetworkPlugins/group/auto/Start 9.87
263 TestNetworkPlugins/group/kindnet/Start 10.08
264 TestNetworkPlugins/group/calico/Start 10.19
265 TestNetworkPlugins/group/custom-flannel/Start 9.86
266 TestNetworkPlugins/group/false/Start 9.9
267 TestNetworkPlugins/group/enable-default-cni/Start 9.93
268 TestNetworkPlugins/group/flannel/Start 10.03
269 TestNetworkPlugins/group/bridge/Start 10.05
270 TestNetworkPlugins/group/kubenet/Start 9.91
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.95
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.26
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 10.04
295 TestStartStop/group/embed-certs/serial/DeployApp 0.1
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
299 TestStartStop/group/embed-certs/serial/SecondStart 5.27
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.03
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/embed-certs/serial/Pause 0.11
307 TestStartStop/group/newest-cni/serial/FirstStart 10.14
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.09
317 TestStartStop/group/newest-cni/serial/SecondStart 5.27
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11

TestDownloadOnly/v1.20.0/json-events (21.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-259000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-259000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (21.31466975s)

-- stdout --
	{"specversion":"1.0","id":"6a4d0322-3162-488e-9a90-d9c3f5b42b7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-259000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f473c75-4405-4205-8f51-97b787edefe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"06c570dd-ba2f-43c4-b5e1-94c249883d4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig"}}
	{"specversion":"1.0","id":"96691a5e-ee65-47b0-a3c3-dfa07ef7acb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7e203ed5-299b-4f49-91f5-0269d5cb435d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31aeae3a-7c13-406d-9cef-d39ad106ce27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube"}}
	{"specversion":"1.0","id":"cd898f0f-a3d7-488a-b05f-1ae6d953ac5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"66586102-aa54-4e15-a09d-48cc90de62ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cc4119b-1eb8-4b05-ba13-1b90daf8aea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"617fdc43-985f-4e56-a239-0e981298e353","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"762be84b-dd4b-4eb1-be04-ab2af95d82d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-259000\" primary control-plane node in \"download-only-259000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"af38402a-f96b-4a44-94d2-9b2590a5fa23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4927bbc1-ccf3-471b-80e6-320068576e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600] Decompressors:map[bz2:0x14000717250 gz:0x14000717258 tar:0x14000717200 tar.bz2:0x14000717210 tar.gz:0x14000717220 tar.xz:0x14000717230 tar.zst:0x14000717240 tbz2:0x14000717210 tgz:0x14000717220 txz:0x14000717230 tzst:0x14000717240 xz:0x14000717260 zip:0x14000717280 zst:0x14000717268] Getters:map[file:0x140017e6580 http:0x140005f81e0 https:0x140005f8230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"fd3e0eb4-bba6-4cc9-aad0-3cc4d92341e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1216 03:23:30.313448    7257 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:23:30.313647    7257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:30.313651    7257 out.go:358] Setting ErrFile to fd 2...
	I1216 03:23:30.313653    7257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:30.313788    7257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	W1216 03:23:30.313867    7257 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20107-6737/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20107-6737/.minikube/config/config.json: no such file or directory
	I1216 03:23:30.315304    7257 out.go:352] Setting JSON to true
	I1216 03:23:30.333806    7257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4981,"bootTime":1734343229,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:23:30.333876    7257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:23:30.339244    7257 out.go:97] [download-only-259000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:23:30.339433    7257 notify.go:220] Checking for updates...
	W1216 03:23:30.339463    7257 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 03:23:30.343029    7257 out.go:169] MINIKUBE_LOCATION=20107
	I1216 03:23:30.346205    7257 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:23:30.351230    7257 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:23:30.354115    7257 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:23:30.358264    7257 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	W1216 03:23:30.364131    7257 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 03:23:30.364326    7257 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:23:30.367141    7257 out.go:97] Using the qemu2 driver based on user configuration
	I1216 03:23:30.367160    7257 start.go:297] selected driver: qemu2
	I1216 03:23:30.367175    7257 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:23:30.367259    7257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:23:30.370190    7257 out.go:169] Automatically selected the socket_vmnet network
	I1216 03:23:30.375717    7257 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 03:23:30.375879    7257 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:23:30.375911    7257 cni.go:84] Creating CNI manager for ""
	I1216 03:23:30.375965    7257 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 03:23:30.376027    7257 start.go:340] cluster config:
	{Name:download-only-259000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-259000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:30.380842    7257 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:23:30.385183    7257 out.go:97] Downloading VM boot image ...
	I1216 03:23:30.385205    7257 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso
	I1216 03:23:38.102723    7257 out.go:97] Starting "download-only-259000" primary control-plane node in "download-only-259000" cluster
	I1216 03:23:38.102757    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:38.158350    7257 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:23:38.158375    7257 cache.go:56] Caching tarball of preloaded images
	I1216 03:23:38.158602    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:38.162705    7257 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 03:23:38.162712    7257 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:38.243555    7257 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:23:50.223524    7257 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:50.223698    7257 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:50.919429    7257 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 03:23:50.919631    7257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/download-only-259000/config.json ...
	I1216 03:23:50.919651    7257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/download-only-259000/config.json: {Name:mk00e1d48f911675fb7532254ccf0baee4d79f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:50.919970    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:50.920220    7257 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1216 03:23:51.542176    7257 out.go:193] 
	W1216 03:23:51.549349    7257 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600] Decompressors:map[bz2:0x14000717250 gz:0x14000717258 tar:0x14000717200 tar.bz2:0x14000717210 tar.gz:0x14000717220 tar.xz:0x14000717230 tar.zst:0x14000717240 tbz2:0x14000717210 tgz:0x14000717220 txz:0x14000717230 tzst:0x14000717240 xz:0x14000717260 zip:0x14000717280 zst:0x14000717268] Getters:map[file:0x140017e6580 http:0x140005f81e0 https:0x140005f8230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1216 03:23:51.549375    7257 out_reason.go:110] 
	W1216 03:23:51.557171    7257 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:23:51.561205    7257 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-259000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (21.32s)
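
Note on root cause: the error above is a hard 404 from dl.k8s.io for the v1.20.0 darwin/arm64 kubectl checksum; v1.20.0 appears to predate published darwin/arm64 kubectl binaries, so this download cannot succeed on an arm64 Mac agent, and the TestDownloadOnly/v1.20.0/kubectl failure below is a direct consequence (the binary was never cached). A minimal check outside minikube, as a diagnostic sketch (the URL is copied from the error message; curl prints the final HTTP status after following redirects):

	# Expect 404 for the missing darwin/arm64 artifact:
	curl -sL -o /dev/null -w '%{http_code}\n' "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"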

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.06s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-803000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-803000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.902358791s)

-- stdout --
	* [offline-docker-803000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-803000" primary control-plane node in "offline-docker-803000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-803000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:34:23.760712    9164 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:34:23.760869    9164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:23.760872    9164 out.go:358] Setting ErrFile to fd 2...
	I1216 03:34:23.760874    9164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:23.761035    9164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:34:23.762398    9164 out.go:352] Setting JSON to false
	I1216 03:34:23.781846    9164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5634,"bootTime":1734343229,"procs":573,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:34:23.781957    9164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:34:23.787560    9164 out.go:177] * [offline-docker-803000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:34:23.795580    9164 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:34:23.795629    9164 notify.go:220] Checking for updates...
	I1216 03:34:23.804515    9164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:34:23.807537    9164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:34:23.815543    9164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:34:23.819590    9164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:34:23.822575    9164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:34:23.825926    9164 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:34:23.825974    9164 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:34:23.829484    9164 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:34:23.836526    9164 start.go:297] selected driver: qemu2
	I1216 03:34:23.836535    9164 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:34:23.836542    9164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:34:23.838843    9164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:34:23.841538    9164 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:34:23.844646    9164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:34:23.844665    9164 cni.go:84] Creating CNI manager for ""
	I1216 03:34:23.844692    9164 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:34:23.844701    9164 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:34:23.844738    9164 start.go:340] cluster config:
	{Name:offline-docker-803000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:34:23.849237    9164 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:34:23.857551    9164 out.go:177] * Starting "offline-docker-803000" primary control-plane node in "offline-docker-803000" cluster
	I1216 03:34:23.861505    9164 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:34:23.861528    9164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:34:23.861545    9164 cache.go:56] Caching tarball of preloaded images
	I1216 03:34:23.861646    9164 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:34:23.861653    9164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:34:23.861712    9164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/offline-docker-803000/config.json ...
	I1216 03:34:23.861722    9164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/offline-docker-803000/config.json: {Name:mk5cc7b1d47e56de50456994e983050227861a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:34:23.862086    9164 start.go:360] acquireMachinesLock for offline-docker-803000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:23.862137    9164 start.go:364] duration metric: took 44.083µs to acquireMachinesLock for "offline-docker-803000"
	I1216 03:34:23.862150    9164 start.go:93] Provisioning new machine with config: &{Name:offline-docker-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:34:23.862186    9164 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:34:23.870569    9164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:34:23.885989    9164 start.go:159] libmachine.API.Create for "offline-docker-803000" (driver="qemu2")
	I1216 03:34:23.886017    9164 client.go:168] LocalClient.Create starting
	I1216 03:34:23.886108    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:34:23.886147    9164 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:23.886157    9164 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:23.886200    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:34:23.886229    9164 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:23.886239    9164 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:23.886632    9164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:34:24.050894    9164 main.go:141] libmachine: Creating SSH key...
	I1216 03:34:24.168282    9164 main.go:141] libmachine: Creating Disk image...
	I1216 03:34:24.168292    9164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:34:24.168489    9164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:24.178603    9164 main.go:141] libmachine: STDOUT: 
	I1216 03:34:24.178631    9164 main.go:141] libmachine: STDERR: 
	I1216 03:34:24.178703    9164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2 +20000M
	I1216 03:34:24.188088    9164 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:34:24.188106    9164 main.go:141] libmachine: STDERR: 
	I1216 03:34:24.188124    9164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:24.188134    9164 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:34:24.188148    9164 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:24.188179    9164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:9b:86:bb:d6:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:24.190149    9164 main.go:141] libmachine: STDOUT: 
	I1216 03:34:24.190164    9164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:24.190187    9164 client.go:171] duration metric: took 304.167333ms to LocalClient.Create
	I1216 03:34:26.190285    9164 start.go:128] duration metric: took 2.328118917s to createHost
	I1216 03:34:26.190311    9164 start.go:83] releasing machines lock for "offline-docker-803000", held for 2.328199875s
	W1216 03:34:26.190323    9164 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:26.196795    9164 out.go:177] * Deleting "offline-docker-803000" in qemu2 ...
	W1216 03:34:26.208548    9164 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:26.208558    9164 start.go:729] Will try again in 5 seconds ...
	I1216 03:34:31.209681    9164 start.go:360] acquireMachinesLock for offline-docker-803000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:31.210188    9164 start.go:364] duration metric: took 406.292µs to acquireMachinesLock for "offline-docker-803000"
	I1216 03:34:31.210334    9164 start.go:93] Provisioning new machine with config: &{Name:offline-docker-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:34:31.210603    9164 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:34:31.220173    9164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:34:31.268850    9164 start.go:159] libmachine.API.Create for "offline-docker-803000" (driver="qemu2")
	I1216 03:34:31.268912    9164 client.go:168] LocalClient.Create starting
	I1216 03:34:31.269051    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:34:31.269127    9164 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:31.269144    9164 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:31.269216    9164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:34:31.269273    9164 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:31.269284    9164 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:31.271944    9164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:34:31.456539    9164 main.go:141] libmachine: Creating SSH key...
	I1216 03:34:31.553821    9164 main.go:141] libmachine: Creating Disk image...
	I1216 03:34:31.553831    9164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:34:31.554061    9164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:31.564226    9164 main.go:141] libmachine: STDOUT: 
	I1216 03:34:31.564242    9164 main.go:141] libmachine: STDERR: 
	I1216 03:34:31.564297    9164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2 +20000M
	I1216 03:34:31.572739    9164 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:34:31.572757    9164 main.go:141] libmachine: STDERR: 
	I1216 03:34:31.572774    9164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:31.572780    9164 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:34:31.572788    9164 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:31.572826    9164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:19:aa:79:e9:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/offline-docker-803000/disk.qcow2
	I1216 03:34:31.574565    9164 main.go:141] libmachine: STDOUT: 
	I1216 03:34:31.574579    9164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:31.574591    9164 client.go:171] duration metric: took 305.944334ms to LocalClient.Create
	I1216 03:34:33.575088    9164 start.go:128] duration metric: took 2.366451833s to createHost
	I1216 03:34:33.575160    9164 start.go:83] releasing machines lock for "offline-docker-803000", held for 2.366947459s
	W1216 03:34:33.575673    9164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:33.594224    9164 out.go:201] 
	W1216 03:34:33.599363    9164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:34:33.599431    9164 out.go:270] * 
	* 
	W1216 03:34:33.601994    9164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:34:33.610079    9164 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-803000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-16 03:34:33.626966 -0800 PST m=+663.405321418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-803000 -n offline-docker-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-803000 -n offline-docker-803000: exit status 7 (72.874583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-803000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-803000
--- FAIL: TestOffline (10.06s)
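
Note on root cause: every qemu2 VM creation in this section fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets a network and minikube gives up after one retry. The same error recurs in the VM-based failures that follow (e.g. TestAddons/Setup). A triage sketch for the agent host; the two paths are the SocketVMnetPath and SocketVMnetClientPath values from the cluster config above, while the gateway address is an assumption based on socket_vmnet's documented usage, not something taken from this log:

	# Is the daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it (vmnet requires root; gateway address is an assumed example):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &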

TestAddons/Setup (10.15s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-215000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-215000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.147316875s)

-- stdout --
	* [addons-215000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-215000" primary control-plane node in "addons-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:24:01.324765    7357 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:24:01.324917    7357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:01.324921    7357 out.go:358] Setting ErrFile to fd 2...
	I1216 03:24:01.324924    7357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:01.325077    7357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:24:01.326292    7357 out.go:352] Setting JSON to false
	I1216 03:24:01.344067    7357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5012,"bootTime":1734343229,"procs":576,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:24:01.344144    7357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:24:01.349389    7357 out.go:177] * [addons-215000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:24:01.356367    7357 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:24:01.356420    7357 notify.go:220] Checking for updates...
	I1216 03:24:01.364375    7357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:24:01.367397    7357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:24:01.370339    7357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:24:01.373387    7357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:24:01.376351    7357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:24:01.379574    7357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:24:01.383362    7357 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:24:01.390286    7357 start.go:297] selected driver: qemu2
	I1216 03:24:01.390292    7357 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:24:01.390297    7357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:24:01.392892    7357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:24:01.396392    7357 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:24:01.399413    7357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:24:01.399442    7357 cni.go:84] Creating CNI manager for ""
	I1216 03:24:01.399464    7357 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:24:01.399475    7357 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:24:01.399501    7357 start.go:340] cluster config:
	{Name:addons-215000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:24:01.404297    7357 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:24:01.412371    7357 out.go:177] * Starting "addons-215000" primary control-plane node in "addons-215000" cluster
	I1216 03:24:01.416319    7357 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:24:01.416337    7357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:24:01.416348    7357 cache.go:56] Caching tarball of preloaded images
	I1216 03:24:01.416436    7357 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:24:01.416446    7357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:24:01.416668    7357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/addons-215000/config.json ...
	I1216 03:24:01.416680    7357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/addons-215000/config.json: {Name:mk11d468baf365f26d5dac4ff5ed88b88c3fa2f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:24:01.417098    7357 start.go:360] acquireMachinesLock for addons-215000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:01.417196    7357 start.go:364] duration metric: took 91.458µs to acquireMachinesLock for "addons-215000"
	I1216 03:24:01.417208    7357 start.go:93] Provisioning new machine with config: &{Name:addons-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:24:01.417239    7357 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:24:01.425374    7357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1216 03:24:01.443683    7357 start.go:159] libmachine.API.Create for "addons-215000" (driver="qemu2")
	I1216 03:24:01.443718    7357 client.go:168] LocalClient.Create starting
	I1216 03:24:01.443861    7357 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:24:01.561071    7357 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:24:01.624890    7357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:24:01.820713    7357 main.go:141] libmachine: Creating SSH key...
	I1216 03:24:01.918158    7357 main.go:141] libmachine: Creating Disk image...
	I1216 03:24:01.918166    7357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:24:01.918396    7357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:01.928024    7357 main.go:141] libmachine: STDOUT: 
	I1216 03:24:01.928044    7357 main.go:141] libmachine: STDERR: 
	I1216 03:24:01.928108    7357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2 +20000M
	I1216 03:24:01.936828    7357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:24:01.936844    7357 main.go:141] libmachine: STDERR: 
	I1216 03:24:01.936855    7357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:01.936861    7357 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:24:01.936902    7357 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:01.936938    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c1:03:bf:c6:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:01.938789    7357 main.go:141] libmachine: STDOUT: 
	I1216 03:24:01.938805    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:01.938838    7357 client.go:171] duration metric: took 495.105458ms to LocalClient.Create
	I1216 03:24:03.940991    7357 start.go:128] duration metric: took 2.523764125s to createHost
	I1216 03:24:03.941222    7357 start.go:83] releasing machines lock for "addons-215000", held for 2.523900042s
	W1216 03:24:03.941275    7357 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:03.959551    7357 out.go:177] * Deleting "addons-215000" in qemu2 ...
	W1216 03:24:03.993228    7357 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:03.993255    7357 start.go:729] Will try again in 5 seconds ...
	I1216 03:24:08.995429    7357 start.go:360] acquireMachinesLock for addons-215000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:08.996053    7357 start.go:364] duration metric: took 502.625µs to acquireMachinesLock for "addons-215000"
	I1216 03:24:08.996200    7357 start.go:93] Provisioning new machine with config: &{Name:addons-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:24:08.996415    7357 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:24:09.018369    7357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1216 03:24:09.066616    7357 start.go:159] libmachine.API.Create for "addons-215000" (driver="qemu2")
	I1216 03:24:09.066670    7357 client.go:168] LocalClient.Create starting
	I1216 03:24:09.066803    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:24:09.066881    7357 main.go:141] libmachine: Decoding PEM data...
	I1216 03:24:09.066897    7357 main.go:141] libmachine: Parsing certificate...
	I1216 03:24:09.066975    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:24:09.067033    7357 main.go:141] libmachine: Decoding PEM data...
	I1216 03:24:09.067043    7357 main.go:141] libmachine: Parsing certificate...
	I1216 03:24:09.067626    7357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:24:09.277980    7357 main.go:141] libmachine: Creating SSH key...
	I1216 03:24:09.369322    7357 main.go:141] libmachine: Creating Disk image...
	I1216 03:24:09.369328    7357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:24:09.369542    7357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:09.379692    7357 main.go:141] libmachine: STDOUT: 
	I1216 03:24:09.379707    7357 main.go:141] libmachine: STDERR: 
	I1216 03:24:09.379765    7357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2 +20000M
	I1216 03:24:09.388386    7357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:24:09.388400    7357 main.go:141] libmachine: STDERR: 
	I1216 03:24:09.388416    7357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:09.388420    7357 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:24:09.388432    7357 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:09.388463    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a8:74:49:d6:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/addons-215000/disk.qcow2
	I1216 03:24:09.390303    7357 main.go:141] libmachine: STDOUT: 
	I1216 03:24:09.390318    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:09.390330    7357 client.go:171] duration metric: took 323.658167ms to LocalClient.Create
	I1216 03:24:11.392521    7357 start.go:128] duration metric: took 2.39609325s to createHost
	I1216 03:24:11.392602    7357 start.go:83] releasing machines lock for "addons-215000", held for 2.396551625s
	W1216 03:24:11.392946    7357 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:11.402419    7357 out.go:201] 
	W1216 03:24:11.412510    7357 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:24:11.412567    7357 out.go:270] * 
	* 
	W1216 03:24:11.415236    7357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:24:11.424467    7357 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-215000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.15s)
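
If socket_vmnet was installed through Homebrew, as minikube's qemu2 driver documentation suggests, restarting the service and retrying one start by hand would confirm whether the refusal is transient. A sketch only; the service name assumes the Homebrew formula:

	sudo brew services restart socket_vmnet
	# retry a single profile to verify connectivity before re-running the suite
	out/minikube-darwin-arm64 start -p addons-215000 --driver=qemu2 --network=socket_vmnet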

TestCertOptions (10.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.926667791s)

-- stdout --
	* [cert-options-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-583000" primary control-plane node in "cert-options-583000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-583000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-583000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-583000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-583000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.282375ms)

-- stdout --
	* The control-plane node cert-options-583000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-583000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-583000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-583000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-583000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-583000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.423333ms)

-- stdout --
	* The control-plane node cert-options-583000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-583000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-583000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-583000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-583000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-16 03:46:11.682719 -0800 PST m=+1361.485820876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-583000 -n cert-options-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-583000 -n cert-options-583000: exit status 7 (34.459375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-583000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-583000
--- FAIL: TestCertOptions (10.27s)
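
For reference, the SAN assertions at cert_options_test.go:69 never ran against a live certificate here. Once a VM actually boots, the same check can be reproduced by hand with the command the test itself uses plus a grep (a sketch, reusing this run's profile name):

	out/minikube-darwin-arm64 -p cert-options-583000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 "Subject Alternative Name"
	# expected entries: IP:127.0.0.1, IP:192.168.15.15, DNS:localhost, DNS:www.google.com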

TestCertExpiration (198.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.474985209s)

-- stdout --
	* [cert-expiration-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-219000" primary control-plane node in "cert-expiration-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.861041667s)

-- stdout --
	* [cert-expiration-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-219000" primary control-plane node in "cert-expiration-219000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-219000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-219000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-219000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-219000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-219000" primary control-plane node in "cert-expiration-219000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-219000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-219000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-219000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-16 03:48:57.23317 -0800 PST m=+1527.039337210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-219000 -n cert-expiration-219000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-219000 -n cert-expiration-219000: exit status 7 (71.46675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-219000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-219000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-219000
--- FAIL: TestCertExpiration (198.50s)
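
The expiration logic itself was never exercised: both starts died before provisioning. On a healthy host, the short-lived certificate from the first start can be inspected directly; a sketch using this run's profile name:

	out/minikube-darwin-arm64 -p cert-expiration-219000 ssh \
		"openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	# with --cert-expiration=3m, notAfter should fall ~3 minutes after the first start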

TestDockerFlags (12.55s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-727000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-727000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.286134167s)

-- stdout --
	* [docker-flags-727000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-727000" primary control-plane node in "docker-flags-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:45:49.019693    9779 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:45:49.019880    9779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:49.019888    9779 out.go:358] Setting ErrFile to fd 2...
	I1216 03:45:49.019891    9779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:49.020025    9779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:45:49.021691    9779 out.go:352] Setting JSON to false
	I1216 03:45:49.044138    9779 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6320,"bootTime":1734343229,"procs":572,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:45:49.044220    9779 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:45:49.053650    9779 out.go:177] * [docker-flags-727000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:45:49.063137    9779 notify.go:220] Checking for updates...
	I1216 03:45:49.071447    9779 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:45:49.080598    9779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:45:49.088588    9779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:45:49.095520    9779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:45:49.105612    9779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:45:49.112493    9779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:45:49.117302    9779 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:45:49.117395    9779 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:45:49.117464    9779 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:45:49.120504    9779 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:45:49.129412    9779 start.go:297] selected driver: qemu2
	I1216 03:45:49.129420    9779 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:45:49.129428    9779 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:45:49.132757    9779 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:45:49.139559    9779 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:45:49.144783    9779 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1216 03:45:49.144811    9779 cni.go:84] Creating CNI manager for ""
	I1216 03:45:49.144838    9779 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:45:49.144843    9779 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:45:49.144901    9779 start.go:340] cluster config:
	{Name:docker-flags-727000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:45:49.151055    9779 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:45:49.156600    9779 out.go:177] * Starting "docker-flags-727000" primary control-plane node in "docker-flags-727000" cluster
	I1216 03:45:49.161571    9779 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:45:49.161620    9779 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:45:49.161636    9779 cache.go:56] Caching tarball of preloaded images
	I1216 03:45:49.161778    9779 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:45:49.161785    9779 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:45:49.161860    9779 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/docker-flags-727000/config.json ...
	I1216 03:45:49.161872    9779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/docker-flags-727000/config.json: {Name:mk8c4a25375a0a4df6436efe81ec7a8f21af9449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:45:49.162138    9779 start.go:360] acquireMachinesLock for docker-flags-727000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:51.324374    9779 start.go:364] duration metric: took 2.162172334s to acquireMachinesLock for "docker-flags-727000"
	I1216 03:45:51.324568    9779 start.go:93] Provisioning new machine with config: &{Name:docker-flags-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:51.324819    9779 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:51.337348    9779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:51.384895    9779 start.go:159] libmachine.API.Create for "docker-flags-727000" (driver="qemu2")
	I1216 03:45:51.384954    9779 client.go:168] LocalClient.Create starting
	I1216 03:45:51.385121    9779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:51.385197    9779 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:51.385218    9779 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:51.385294    9779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:51.385350    9779 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:51.385365    9779 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:51.386100    9779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:51.570377    9779 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:51.757216    9779 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:51.757224    9779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:51.757457    9779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:51.767649    9779 main.go:141] libmachine: STDOUT: 
	I1216 03:45:51.767704    9779 main.go:141] libmachine: STDERR: 
	I1216 03:45:51.767762    9779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2 +20000M
	I1216 03:45:51.776308    9779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:51.776326    9779 main.go:141] libmachine: STDERR: 
	I1216 03:45:51.776340    9779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:51.776347    9779 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:51.776370    9779 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:51.776400    9779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0d:80:0c:83:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:51.778262    9779 main.go:141] libmachine: STDOUT: 
	I1216 03:45:51.778284    9779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:51.778303    9779 client.go:171] duration metric: took 393.349416ms to LocalClient.Create
	I1216 03:45:53.780487    9779 start.go:128] duration metric: took 2.455633541s to createHost
	I1216 03:45:53.780563    9779 start.go:83] releasing machines lock for "docker-flags-727000", held for 2.456183709s
	W1216 03:45:53.780611    9779 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:53.796082    9779 out.go:177] * Deleting "docker-flags-727000" in qemu2 ...
	W1216 03:45:53.827761    9779 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:53.827803    9779 start.go:729] Will try again in 5 seconds ...
	I1216 03:45:58.829982    9779 start.go:360] acquireMachinesLock for docker-flags-727000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:58.830492    9779 start.go:364] duration metric: took 408.25µs to acquireMachinesLock for "docker-flags-727000"
	I1216 03:45:58.830626    9779 start.go:93] Provisioning new machine with config: &{Name:docker-flags-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:58.830840    9779 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:58.839514    9779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:58.888283    9779 start.go:159] libmachine.API.Create for "docker-flags-727000" (driver="qemu2")
	I1216 03:45:58.888336    9779 client.go:168] LocalClient.Create starting
	I1216 03:45:58.888488    9779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:58.888568    9779 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:58.888592    9779 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:58.888656    9779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:58.888716    9779 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:58.888730    9779 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:58.889508    9779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:59.084333    9779 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:59.193270    9779 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:59.193276    9779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:59.193520    9779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:59.203548    9779 main.go:141] libmachine: STDOUT: 
	I1216 03:45:59.203572    9779 main.go:141] libmachine: STDERR: 
	I1216 03:45:59.203645    9779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2 +20000M
	I1216 03:45:59.212548    9779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:59.212564    9779 main.go:141] libmachine: STDERR: 
	I1216 03:45:59.212577    9779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:59.212583    9779 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:59.212593    9779 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:59.212630    9779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:7a:7f:de:cb:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/docker-flags-727000/disk.qcow2
	I1216 03:45:59.214522    9779 main.go:141] libmachine: STDOUT: 
	I1216 03:45:59.214539    9779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:59.214553    9779 client.go:171] duration metric: took 326.216417ms to LocalClient.Create
	I1216 03:46:01.216686    9779 start.go:128] duration metric: took 2.385859042s to createHost
	I1216 03:46:01.216754    9779 start.go:83] releasing machines lock for "docker-flags-727000", held for 2.386282875s
	W1216 03:46:01.217158    9779 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:01.230929    9779 out.go:201] 
	W1216 03:46:01.240018    9779 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:46:01.240045    9779 out.go:270] * 
	W1216 03:46:01.242709    9779 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:46:01.254799    9779 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-727000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
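Every start attempt in the stderr above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. A quick way to confirm whether the daemon is listening is to dial the socket directly. The sketch below is a standalone diagnostic, not part of the minikube test suite; the socket path comes from the SocketVMnetPath field logged above.

// probe_socket_vmnet.go - standalone diagnostic sketch (not minikube code).
// It attempts the same unix-socket connection that socket_vmnet_client makes
// before handing the connection fd to QEMU.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath from the log above
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// This is the condition the log reports as:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", path)
}

A refused connection here means the socket_vmnet daemon on the build agent is not running at all, which would account for every qemu2 provisioning failure in this report, not just this test.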
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-727000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-727000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.011917ms)

-- stdout --
	* The control-plane node docker-flags-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-727000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-727000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-727000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-727000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-727000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-727000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-727000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-727000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (50.669917ms)

-- stdout --
	* The control-plane node docker-flags-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-727000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-727000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-727000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-727000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-727000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-16 03:46:01.407416 -0800 PST m=+1351.210327001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-727000 -n docker-flags-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-727000 -n docker-flags-727000: exit status 7 (34.182708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-727000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-727000
--- FAIL: TestDockerFlags (12.55s)
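For context, TestDockerFlags never reached its real assertion: docker_test.go:56-73 verifies that the --docker-env and --docker-opt values passed at start time surface in the docker systemd unit inside the guest. A simplified sketch of that check (illustrative only; dockerEnvContains is a hypothetical helper, not minikube code):

// Sketch of the TestDockerFlags environment assertion. Assumes a running
// profile; in this run the VM never started, so `minikube ssh` exits 83
// with the "host is not running" message seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerEnvContains asks systemd inside the guest for docker's Environment=
// property and checks that each expected KEY=VALUE pair is present.
func dockerEnvContains(profile string, wanted ...string) error {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube ssh failed: %v\n%s", err, out)
	}
	for _, kv := range wanted {
		if !strings.Contains(string(out), kv) {
			return fmt.Errorf("expected %q in docker Environment, got: %s", kv, out)
		}
	}
	return nil
}

func main() {
	// FOO=BAR and BAZ=BAT mirror the --docker-env flags in the failing run.
	if err := dockerEnvContains("docker-flags-727000", "FOO=BAR", "BAZ=BAT"); err != nil {
		fmt.Println(err)
	}
}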

TestForceSystemdFlag (10.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-860000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-860000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.998754083s)

-- stdout --
	* [force-systemd-flag-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-860000" primary control-plane node in "force-systemd-flag-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:45:14.544641    9623 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:45:14.544840    9623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:14.544848    9623 out.go:358] Setting ErrFile to fd 2...
	I1216 03:45:14.544857    9623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:14.545021    9623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:45:14.552612    9623 out.go:352] Setting JSON to false
	I1216 03:45:14.571748    9623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6285,"bootTime":1734343229,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:45:14.571825    9623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:45:14.577443    9623 out.go:177] * [force-systemd-flag-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:45:14.584772    9623 notify.go:220] Checking for updates...
	I1216 03:45:14.592610    9623 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:45:14.596553    9623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:45:14.600550    9623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:45:14.604590    9623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:45:14.608515    9623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:45:14.612603    9623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:45:14.616854    9623 config.go:182] Loaded profile config "NoKubernetes-850000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:45:14.616932    9623 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:45:14.616985    9623 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:45:14.624612    9623 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:45:14.628522    9623 start.go:297] selected driver: qemu2
	I1216 03:45:14.628529    9623 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:45:14.628537    9623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:45:14.631138    9623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:45:14.634627    9623 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:45:14.638526    9623 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:45:14.638546    9623 cni.go:84] Creating CNI manager for ""
	I1216 03:45:14.638578    9623 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:45:14.638582    9623 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:45:14.638656    9623 start.go:340] cluster config:
	{Name:force-systemd-flag-860000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:45:14.643339    9623 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:45:14.647611    9623 out.go:177] * Starting "force-systemd-flag-860000" primary control-plane node in "force-systemd-flag-860000" cluster
	I1216 03:45:14.654620    9623 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:45:14.654640    9623 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:45:14.654656    9623 cache.go:56] Caching tarball of preloaded images
	I1216 03:45:14.654748    9623 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:45:14.654754    9623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:45:14.654853    9623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/force-systemd-flag-860000/config.json ...
	I1216 03:45:14.654865    9623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/force-systemd-flag-860000/config.json: {Name:mk016ceeba9625f492246aa2abd2bac53d232dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:45:14.655298    9623 start.go:360] acquireMachinesLock for force-systemd-flag-860000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:14.655362    9623 start.go:364] duration metric: took 54.417µs to acquireMachinesLock for "force-systemd-flag-860000"
	I1216 03:45:14.655376    9623 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:14.655402    9623 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:14.663479    9623 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:14.681243    9623 start.go:159] libmachine.API.Create for "force-systemd-flag-860000" (driver="qemu2")
	I1216 03:45:14.681275    9623 client.go:168] LocalClient.Create starting
	I1216 03:45:14.681351    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:14.681393    9623 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:14.681409    9623 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:14.681452    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:14.681483    9623 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:14.681490    9623 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:14.681981    9623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:14.853977    9623 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:14.925113    9623 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:14.925122    9623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:14.925360    9623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:14.935058    9623 main.go:141] libmachine: STDOUT: 
	I1216 03:45:14.935073    9623 main.go:141] libmachine: STDERR: 
	I1216 03:45:14.935133    9623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2 +20000M
	I1216 03:45:14.943472    9623 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:14.943488    9623 main.go:141] libmachine: STDERR: 
	I1216 03:45:14.943504    9623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:14.943511    9623 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:14.943525    9623 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:14.943563    9623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:da:33:d1:48:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:14.945285    9623 main.go:141] libmachine: STDOUT: 
	I1216 03:45:14.945298    9623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:14.945317    9623 client.go:171] duration metric: took 264.039333ms to LocalClient.Create
	I1216 03:45:16.947449    9623 start.go:128] duration metric: took 2.292066625s to createHost
	I1216 03:45:16.947575    9623 start.go:83] releasing machines lock for "force-systemd-flag-860000", held for 2.292193625s
	W1216 03:45:16.947639    9623 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:16.967344    9623 out.go:177] * Deleting "force-systemd-flag-860000" in qemu2 ...
	W1216 03:45:17.003830    9623 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:17.003855    9623 start.go:729] Will try again in 5 seconds ...
	I1216 03:45:22.006008    9623 start.go:360] acquireMachinesLock for force-systemd-flag-860000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:22.022753    9623 start.go:364] duration metric: took 16.650708ms to acquireMachinesLock for "force-systemd-flag-860000"
	I1216 03:45:22.022814    9623 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:22.023061    9623 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:22.035157    9623 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:22.083944    9623 start.go:159] libmachine.API.Create for "force-systemd-flag-860000" (driver="qemu2")
	I1216 03:45:22.084000    9623 client.go:168] LocalClient.Create starting
	I1216 03:45:22.084177    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:22.084260    9623 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:22.084278    9623 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:22.084348    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:22.084404    9623 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:22.084420    9623 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:22.085053    9623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:22.353770    9623 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:22.429076    9623 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:22.429081    9623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:22.429310    9623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:22.439433    9623 main.go:141] libmachine: STDOUT: 
	I1216 03:45:22.439450    9623 main.go:141] libmachine: STDERR: 
	I1216 03:45:22.439505    9623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2 +20000M
	I1216 03:45:22.447919    9623 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:22.447936    9623 main.go:141] libmachine: STDERR: 
	I1216 03:45:22.447957    9623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:22.447961    9623 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:22.447973    9623 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:22.448000    9623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:1c:71:18:93:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-flag-860000/disk.qcow2
	I1216 03:45:22.449817    9623 main.go:141] libmachine: STDOUT: 
	I1216 03:45:22.449831    9623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:22.449845    9623 client.go:171] duration metric: took 365.844291ms to LocalClient.Create
	I1216 03:45:24.451981    9623 start.go:128] duration metric: took 2.428905417s to createHost
	I1216 03:45:24.452100    9623 start.go:83] releasing machines lock for "force-systemd-flag-860000", held for 2.429318208s
	W1216 03:45:24.452423    9623 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:24.466975    9623 out.go:201] 
	W1216 03:45:24.476103    9623 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:45:24.476149    9623 out.go:270] * 
	W1216 03:45:24.479019    9623 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:45:24.486943    9623 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-860000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
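The stderr above also shows minikube's one-shot retry on host creation: create the VM, hit the socket error, delete the profile, wait five seconds, create once more, then give up with GUEST_PROVISION. A compressed Go sketch of that flow (createHost is a stand-in for the start.go/LocalClient.Create path, not the real implementation):

// Retry flow as seen in the log: both attempts fail the same way because the
// underlying socket_vmnet daemon is down, so waiting does not help.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// Stand-in for the qemu2 driver's host creation; in this report it always
	// fails with the connection-refused error on /var/run/socket_vmnet.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}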
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-860000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-860000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (88.409834ms)

-- stdout --
	* The control-plane node force-systemd-flag-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-860000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-860000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-16 03:45:24.600617 -0800 PST m=+1314.402846460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-860000 -n force-systemd-flag-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-860000 -n force-systemd-flag-860000: exit status 7 (37.267167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-860000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-860000
--- FAIL: TestForceSystemdFlag (10.21s)
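Had the guest come up, docker_test.go:110 would have checked the effect of --force-systemd: docker inside the VM must report the systemd cgroup driver rather than cgroupfs. A minimal sketch of that verification (assumes a running profile; not the actual test code):

// Cgroup-driver check behind TestForceSystemdFlag, reduced to its essence.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-860000", "ssh",
		"docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		// Exit status 83 in the log above: the control-plane host is Stopped.
		fmt.Println("guest not running, cannot verify cgroup driver:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}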

TestForceSystemdEnv (10.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-009000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1216 03:45:38.883625    7256 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-009000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.885776583s)

-- stdout --
	* [force-systemd-env-009000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-009000" primary control-plane node in "force-systemd-env-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:45:38.911625    9736 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:45:38.911784    9736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:38.911789    9736 out.go:358] Setting ErrFile to fd 2...
	I1216 03:45:38.911792    9736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:45:38.911922    9736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:45:38.913576    9736 out.go:352] Setting JSON to false
	I1216 03:45:38.933861    9736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6309,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:45:38.933934    9736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:45:38.940353    9736 out.go:177] * [force-systemd-env-009000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:45:38.947207    9736 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:45:38.947238    9736 notify.go:220] Checking for updates...
	I1216 03:45:38.962185    9736 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:45:38.969183    9736 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:45:38.976166    9736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:45:38.987172    9736 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:45:38.996129    9736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1216 03:45:38.999584    9736 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:45:38.999633    9736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:45:39.006089    9736 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:45:39.012170    9736 start.go:297] selected driver: qemu2
	I1216 03:45:39.012176    9736 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:45:39.012181    9736 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:45:39.014816    9736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:45:39.024141    9736 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:45:39.031177    9736 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:45:39.031190    9736 cni.go:84] Creating CNI manager for ""
	I1216 03:45:39.031231    9736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:45:39.031238    9736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:45:39.031289    9736 start.go:340] cluster config:
	{Name:force-systemd-env-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:45:39.036202    9736 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:45:39.043077    9736 out.go:177] * Starting "force-systemd-env-009000" primary control-plane node in "force-systemd-env-009000" cluster
	I1216 03:45:39.051161    9736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:45:39.051191    9736 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:45:39.051208    9736 cache.go:56] Caching tarball of preloaded images
	I1216 03:45:39.051316    9736 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:45:39.051323    9736 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:45:39.051405    9736 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/force-systemd-env-009000/config.json ...
	I1216 03:45:39.051418    9736 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/force-systemd-env-009000/config.json: {Name:mkc4c140fee32ec25f92c3dbabe5e999ba09a717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:45:39.055895    9736 start.go:360] acquireMachinesLock for force-systemd-env-009000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:39.055960    9736 start.go:364] duration metric: took 55.875µs to acquireMachinesLock for "force-systemd-env-009000"
	I1216 03:45:39.055975    9736 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:39.056010    9736 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:39.063208    9736 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:39.082311    9736 start.go:159] libmachine.API.Create for "force-systemd-env-009000" (driver="qemu2")
	I1216 03:45:39.082336    9736 client.go:168] LocalClient.Create starting
	I1216 03:45:39.082410    9736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:39.082451    9736 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:39.082467    9736 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:39.082504    9736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:39.082536    9736 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:39.082546    9736 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:39.082924    9736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:39.244624    9736 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:39.320444    9736 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:39.320449    9736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:39.320681    9736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:39.330501    9736 main.go:141] libmachine: STDOUT: 
	I1216 03:45:39.330523    9736 main.go:141] libmachine: STDERR: 
	I1216 03:45:39.330584    9736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2 +20000M
	I1216 03:45:39.339058    9736 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:39.339078    9736 main.go:141] libmachine: STDERR: 
	I1216 03:45:39.339094    9736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:39.339099    9736 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:39.339109    9736 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:39.339139    9736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:1a:e2:13:62:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:39.340922    9736 main.go:141] libmachine: STDOUT: 
	I1216 03:45:39.340938    9736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:39.340961    9736 client.go:171] duration metric: took 258.623334ms to LocalClient.Create
	I1216 03:45:41.343144    9736 start.go:128] duration metric: took 2.2871375s to createHost
	I1216 03:45:41.343208    9736 start.go:83] releasing machines lock for "force-systemd-env-009000", held for 2.287280167s
	W1216 03:45:41.343273    9736 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:41.373428    9736 out.go:177] * Deleting "force-systemd-env-009000" in qemu2 ...
	W1216 03:45:41.398876    9736 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:41.398895    9736 start.go:729] Will try again in 5 seconds ...
	I1216 03:45:46.401206    9736 start.go:360] acquireMachinesLock for force-systemd-env-009000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:45:46.401800    9736 start.go:364] duration metric: took 487.042µs to acquireMachinesLock for "force-systemd-env-009000"
	I1216 03:45:46.401952    9736 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:45:46.402170    9736 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:45:46.423954    9736 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:45:46.472889    9736 start.go:159] libmachine.API.Create for "force-systemd-env-009000" (driver="qemu2")
	I1216 03:45:46.472944    9736 client.go:168] LocalClient.Create starting
	I1216 03:45:46.473081    9736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:45:46.473153    9736 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:46.473167    9736 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:46.473228    9736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:45:46.473288    9736 main.go:141] libmachine: Decoding PEM data...
	I1216 03:45:46.473298    9736 main.go:141] libmachine: Parsing certificate...
	I1216 03:45:46.473995    9736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:45:46.646331    9736 main.go:141] libmachine: Creating SSH key...
	I1216 03:45:46.692274    9736 main.go:141] libmachine: Creating Disk image...
	I1216 03:45:46.692280    9736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:45:46.692499    9736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:46.702416    9736 main.go:141] libmachine: STDOUT: 
	I1216 03:45:46.702437    9736 main.go:141] libmachine: STDERR: 
	I1216 03:45:46.702499    9736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2 +20000M
	I1216 03:45:46.711045    9736 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:45:46.711067    9736 main.go:141] libmachine: STDERR: 
	I1216 03:45:46.711079    9736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:46.711085    9736 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:45:46.711092    9736 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:45:46.711135    9736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:27:df:91:b8:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/force-systemd-env-009000/disk.qcow2
	I1216 03:45:46.712975    9736 main.go:141] libmachine: STDOUT: 
	I1216 03:45:46.712999    9736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:45:46.713012    9736 client.go:171] duration metric: took 240.067333ms to LocalClient.Create
	I1216 03:45:48.715206    9736 start.go:128] duration metric: took 2.313010958s to createHost
	I1216 03:45:48.715279    9736 start.go:83] releasing machines lock for "force-systemd-env-009000", held for 2.313498291s
	W1216 03:45:48.715729    9736 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:45:48.731352    9736 out.go:201] 
	W1216 03:45:48.736584    9736 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:45:48.736608    9736 out.go:270] * 
	* 
	W1216 03:45:48.739147    9736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:45:48.749330    9736 out.go:201] 

** /stderr **
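
The transcript above shows libmachine building the VM disk in two steps that both succeed: qemu-img convert from the raw seed image to qcow2, then qemu-img resize with +20000M. The following is a minimal Go sketch of that sequence using os/exec; the qemu-img arguments are copied from the logged commands, but the shortened paths are placeholders and this is illustrative, not minikube's actual qemu2 driver code.

    package main

    import (
    	"log"
    	"os/exec"
    )

    // createDisk mirrors the two qemu-img invocations logged above: convert the
    // raw seed image to qcow2, then grow the qcow2 by the requested amount.
    func createDisk(raw, qcow2, grow string) error {
    	convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
    	if out, err := convert.CombinedOutput(); err != nil {
    		log.Printf("convert output: %s", out)
    		return err
    	}
    	resize := exec.Command("qemu-img", "resize", qcow2, grow)
    	if out, err := resize.CombinedOutput(); err != nil {
    		log.Printf("resize output: %s", out)
    		return err
    	}
    	return nil
    }

    func main() {
    	// Placeholder paths; the log uses the profile's machine directory.
    	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
    		log.Fatal(err)
    	}
    }

Both steps report empty STDERR and "Image resized." in the log, so disk creation is fine; the failure occurs one step later, when socket_vmnet_client cannot reach its daemon.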
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-009000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-009000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-009000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.484ms)

-- stdout --
	* The control-plane node force-systemd-env-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-009000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-009000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-16 03:45:48.850639 -0800 PST m=+1338.653317418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-009000 -n force-systemd-env-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-009000 -n force-systemd-env-009000: exit status 7 (36.811ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-009000
--- FAIL: TestForceSystemdEnv (10.10s)
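
Every failure in this report shares the root cause visible above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A quick reachability probe, sketched below in Go, dials the socket the way any client would; the path comes from the log, while the two-second timeout is an arbitrary choice.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path reported in the failures
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// On this host the dial fails, matching the "Connection refused" above.
    		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

A refused connection points at the socket_vmnet service on the Jenkins agent rather than at minikube itself; if it was installed via Homebrew, restarting it with sudo brew services start socket_vmnet would be the usual fix, though the exact service setup on this agent is an assumption.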

TestErrorSpam/setup (9.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-451000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-451000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 --driver=qemu2 : exit status 80 (9.8583575s)

-- stdout --
	* [nospam-451000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-451000" primary control-plane node in "nospam-451000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-451000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-451000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-451000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-451000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-451000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20107
- KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-451000" primary control-plane node in "nospam-451000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-451000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-451000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.86s)
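
The three "missing kubeadm init sub-step" lines fall out directly from the socket failure: the VM never boots, so kubeadm never prints its progress markers. A hypothetical re-creation of that assertion is sketched below; the required strings are copied from the log, while the check itself is a stand-in for the real logic in error_spam_test.go.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	stdout := "" // captured `minikube start` stdout would go here
    	required := []string{
    		"Generating certificates and keys ...",
    		"Booting up control plane ...",
    		"Configuring RBAC rules ...",
    	}
    	// Report every expected kubeadm init sub-step absent from stdout.
    	for _, step := range required {
    		if !strings.Contains(stdout, step) {
    			fmt.Printf("missing kubeadm init sub-step %q\n", step)
    		}
    	}
    }

Since provisioning aborted before kubeadm ran, all three sub-steps are reported missing, which is exactly what the failure lines above show.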

TestFunctional/serial/StartWithProxy (9.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-648000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.845293208s)

-- stdout --
	* [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-648000" primary control-plane node in "functional-648000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-648000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-648000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20107
- KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-648000" primary control-plane node in "functional-648000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-648000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (74.512208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.92s)
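
The repeated "Local proxy ignored: not passing HTTP_PROXY=localhost:60832 to docker env" warnings are expected behavior: a loopback proxy address is meaningless from inside the guest, so minikube drops it. The sketch below shows that kind of loopback check; it is illustrative only, not minikube's actual proxy code.

    package main

    import (
    	"fmt"
    	"net"
    	"net/url"
    )

    // isLocalProxy reports whether a proxy address points at the loopback
    // interface and therefore cannot be reached from inside a VM.
    func isLocalProxy(raw string) bool {
    	u, err := url.Parse("http://" + raw)
    	if err != nil {
    		return false
    	}
    	host := u.Hostname()
    	if host == "localhost" {
    		return true
    	}
    	ip := net.ParseIP(host)
    	return ip != nil && ip.IsLoopback()
    }

    func main() {
    	fmt.Println(isLocalProxy("localhost:60832")) // true, so the proxy is ignored
    }

The test itself still fails for the usual reason: it expects the proxy-related start messages from a successful boot, and the VM never starts.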

TestFunctional/serial/SoftStart (5.28s)

=== RUN   TestFunctional/serial/SoftStart
I1216 03:24:42.310246    7256 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-648000 --alsologtostderr -v=8: exit status 80 (5.20202325s)

-- stdout --
	* [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-648000" primary control-plane node in "functional-648000" cluster
	* Restarting existing qemu2 VM for "functional-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:24:42.344768    7520 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:24:42.344943    7520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:42.344946    7520 out.go:358] Setting ErrFile to fd 2...
	I1216 03:24:42.344949    7520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:42.345073    7520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:24:42.346143    7520 out.go:352] Setting JSON to false
	I1216 03:24:42.364241    7520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5053,"bootTime":1734343229,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:24:42.364316    7520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:24:42.369669    7520 out.go:177] * [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:24:42.378644    7520 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:24:42.378694    7520 notify.go:220] Checking for updates...
	I1216 03:24:42.386598    7520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:24:42.390619    7520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:24:42.394462    7520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:24:42.397664    7520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:24:42.400651    7520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:24:42.403867    7520 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:24:42.403914    7520 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:24:42.408590    7520 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:24:42.415592    7520 start.go:297] selected driver: qemu2
	I1216 03:24:42.415598    7520 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:24:42.415645    7520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:24:42.418215    7520 cni.go:84] Creating CNI manager for ""
	I1216 03:24:42.418249    7520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:24:42.418297    7520 start.go:340] cluster config:
	{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:24:42.422968    7520 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:24:42.431654    7520 out.go:177] * Starting "functional-648000" primary control-plane node in "functional-648000" cluster
	I1216 03:24:42.435612    7520 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:24:42.435626    7520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:24:42.435633    7520 cache.go:56] Caching tarball of preloaded images
	I1216 03:24:42.435704    7520 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:24:42.435710    7520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:24:42.435754    7520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/functional-648000/config.json ...
	I1216 03:24:42.436248    7520 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:42.436279    7520 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "functional-648000"
	I1216 03:24:42.436287    7520 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:24:42.436293    7520 fix.go:54] fixHost starting: 
	I1216 03:24:42.436419    7520 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
	W1216 03:24:42.436428    7520 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:24:42.444547    7520 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
	I1216 03:24:42.448594    7520 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:42.448633    7520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
	I1216 03:24:42.450992    7520 main.go:141] libmachine: STDOUT: 
	I1216 03:24:42.451011    7520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:42.451039    7520 fix.go:56] duration metric: took 14.745584ms for fixHost
	I1216 03:24:42.451045    7520 start.go:83] releasing machines lock for "functional-648000", held for 14.761958ms
	W1216 03:24:42.451051    7520 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:24:42.451092    7520 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:42.451097    7520 start.go:729] Will try again in 5 seconds ...
	I1216 03:24:47.453243    7520 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:47.453664    7520 start.go:364] duration metric: took 321.375µs to acquireMachinesLock for "functional-648000"
	I1216 03:24:47.453807    7520 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:24:47.453827    7520 fix.go:54] fixHost starting: 
	I1216 03:24:47.454539    7520 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
	W1216 03:24:47.454569    7520 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:24:47.461976    7520 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
	I1216 03:24:47.466116    7520 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:47.466376    7520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
	I1216 03:24:47.476532    7520 main.go:141] libmachine: STDOUT: 
	I1216 03:24:47.476585    7520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:47.476701    7520 fix.go:56] duration metric: took 22.876ms for fixHost
	I1216 03:24:47.476724    7520 start.go:83] releasing machines lock for "functional-648000", held for 23.035583ms
	W1216 03:24:47.476894    7520 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:47.485004    7520 out.go:201] 
	W1216 03:24:47.489062    7520 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:24:47.489106    7520 out.go:270] * 
	* 
	W1216 03:24:47.491503    7520 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:24:47.500047    7520 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-648000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.20368325s for "functional-648000" cluster.
I1216 03:24:47.514283    7256 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (75.333791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.28s)
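
Soft start reuses the saved profile rather than provisioning a new machine: the log shows the config being loaded from .minikube/profiles/functional-648000/config.json before fixHost restarts the existing VM. The snippet below sketches reading a few fields from that file; the struct is a hand-written subset keyed to the field names in the config dump above, not minikube's actual config type.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    )

    // clusterConfig holds just the fields this sketch inspects.
    type clusterConfig struct {
    	Name   string
    	Driver string
    	Memory int
    	CPUs   int
    }

    func main() {
    	// Path pattern taken from the log; MINIKUBE_HOME may relocate it.
    	path := os.ExpandEnv("$HOME/.minikube/profiles/functional-648000/config.json")
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var cc clusterConfig
    	if err := json.Unmarshal(data, &cc); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s: driver=%s memory=%dMB cpus=%d\n", cc.Name, cc.Driver, cc.Memory, cc.CPUs)
    }

Because the profile exists but its VM is stopped, soft start goes down the "Restarting existing qemu2 VM" path and hits the same socket_vmnet refusal as the initial start.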

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.83025ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-648000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (34.260333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
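
"error: current-context is not set" follows from the start failure: minikube only writes the functional-648000 context into the kubeconfig once the cluster is up. The same check the test performs via kubectl can be done directly with client-go, as sketched below (assuming k8s.io/client-go is available; the default loading rules honor KUBECONFIG just as kubectl does).

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors KUBECONFIG
    	cfg, err := rules.Load()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if cfg.CurrentContext == "" {
    		log.Fatal("current-context is not set") // the error seen above
    	}
    	fmt.Println(cfg.CurrentContext)
    }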

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-648000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-648000 get po -A: exit status 1 (26.8455ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-648000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-648000\n"*: args "kubectl --context functional-648000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-648000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (34.942875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl images: exit status 83 (43.959875ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.554583ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-648000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.925375ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (46.845125ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-648000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
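
The cache_reload flow is a four-step protocol: remove the pause image in the guest, confirm it is gone, run cache reload, confirm it is back. Here every step short-circuits with exit status 83 because there is no running host. A compact sketch of the sequence is below; the binary and profile names come from the log, and wantErr marks the one step that is supposed to fail on a healthy cluster.

    package main

    import (
    	"log"
    	"os/exec"
    )

    // run executes one step of the sequence and flags any step whose
    // success/failure does not match the expectation.
    func run(wantErr bool, args ...string) {
    	err := exec.Command("out/minikube-darwin-arm64", args...).Run()
    	if (err != nil) != wantErr {
    		log.Printf("step %v: unexpected result: %v", args, err)
    	}
    }

    func main() {
    	p := "functional-648000"
    	run(false, "-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
    	run(true, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") // image gone
    	run(false, "-p", p, "cache", "reload")
    	run(false, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") // image restored
    }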

TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 kubectl -- --context functional-648000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 kubectl -- --context functional-648000 get pods: exit status 1 (711.478541ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-648000
	* no server found for cluster "functional-648000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-648000 kubectl -- --context functional-648000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (36.339125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.3s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-648000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-648000 get pods: exit status 1 (1.165973541s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-648000
	* no server found for cluster "functional-648000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-648000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (1.128307916s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.30s)
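
Both kubectl invocations fail with "context was not found for specified context: functional-648000": the aborted start never wrote a cluster entry into the kubeconfig at /Users/jenkins/minikube-integration/20107-6737/kubeconfig. One way to confirm which contexts that file actually holds is to load it with client-go's clientcmd package; the sketch below is illustrative only and assumes the k8s.io/client-go dependency, which is not how the minikube suite itself performs this check.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the KUBECONFIG value printed in the start output below.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/20107-6737/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name) // "functional-648000" is absent in this run
	}
}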

TestFunctional/serial/ExtraConfig (5.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-648000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.208366834s)

-- stdout --
	* [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-648000" primary control-plane node in "functional-648000" cluster
	* Restarting existing qemu2 VM for "functional-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-648000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-648000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.209254042s for "functional-648000" cluster.
I1216 03:24:59.433193    7256 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (73.852875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.28s)
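
The root cause recorded above is not Kubernetes-level at all: both restart attempts die because the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM, and everything that depends on it, stays stopped. A minimal probe for that condition is sketched below; it assumes only the socket path shown in the errors above and roughly mimics the initial connect that /opt/socket_vmnet/bin/socket_vmnet_client performs.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the driver errors in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this CI host this reports "connect: connection refused",
		// matching the driver failure in the log.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}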

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-648000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-648000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.761292ms)

** stderr ** 
	error: context "functional-648000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-648000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.188959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 logs: exit status 83 (80.559792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
	|         | -p download-only-259000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
	| delete  | -p download-only-259000                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
	| start   | -o=json --download-only                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
	|         | -p download-only-503000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| delete  | -p download-only-503000                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| delete  | -p download-only-259000                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| delete  | -p download-only-503000                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| start   | --download-only -p                                                       | binary-mirror-381000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | binary-mirror-381000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:60797                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-381000                                                  | binary-mirror-381000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| addons  | enable dashboard -p                                                      | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | addons-215000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | addons-215000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-215000 --wait=true                                             | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-215000                                                         | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| start   | -p nospam-451000 -n=1 --memory=2250 --wait=false                         | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-451000                                                         | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | minikube-local-cache-test:functional-648000                              |                      |         |         |                     |                     |
	| cache   | functional-648000 cache delete                                           | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | minikube-local-cache-test:functional-648000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| ssh     | functional-648000 ssh sudo                                               | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-648000                                                        | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-648000 ssh                                                    | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-648000 cache reload                                           | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	| ssh     | functional-648000 ssh                                                    | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-648000 kubectl --                                             | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --context functional-648000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 03:24:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:24:54.259463    7600 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:24:54.259621    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:54.259622    7600 out.go:358] Setting ErrFile to fd 2...
	I1216 03:24:54.259624    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:54.259745    7600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:24:54.260906    7600 out.go:352] Setting JSON to false
	I1216 03:24:54.279620    7600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1734343229,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:24:54.279739    7600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:24:54.286229    7600 out.go:177] * [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:24:54.293162    7600 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:24:54.293219    7600 notify.go:220] Checking for updates...
	I1216 03:24:54.299148    7600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:24:54.306192    7600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:24:54.316131    7600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:24:54.320126    7600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:24:54.323155    7600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:24:54.327459    7600 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:24:54.327513    7600 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:24:54.332172    7600 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:24:54.339090    7600 start.go:297] selected driver: qemu2
	I1216 03:24:54.339094    7600 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:24:54.339143    7600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:24:54.342029    7600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:24:54.342047    7600 cni.go:84] Creating CNI manager for ""
	I1216 03:24:54.342079    7600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:24:54.342148    7600 start.go:340] cluster config:
	{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:24:54.346890    7600 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:24:54.355160    7600 out.go:177] * Starting "functional-648000" primary control-plane node in "functional-648000" cluster
	I1216 03:24:54.359143    7600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:24:54.359157    7600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:24:54.359168    7600 cache.go:56] Caching tarball of preloaded images
	I1216 03:24:54.359259    7600 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:24:54.359262    7600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:24:54.359332    7600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/functional-648000/config.json ...
	I1216 03:24:54.359832    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:54.359883    7600 start.go:364] duration metric: took 46.5µs to acquireMachinesLock for "functional-648000"
	I1216 03:24:54.359890    7600 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:24:54.359894    7600 fix.go:54] fixHost starting: 
	I1216 03:24:54.360026    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
	W1216 03:24:54.360035    7600 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:24:54.368176    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
	I1216 03:24:54.371090    7600 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:54.371144    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
	I1216 03:24:54.373512    7600 main.go:141] libmachine: STDOUT: 
	I1216 03:24:54.373525    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:54.373557    7600 fix.go:56] duration metric: took 13.66275ms for fixHost
	I1216 03:24:54.373560    7600 start.go:83] releasing machines lock for "functional-648000", held for 13.674709ms
	W1216 03:24:54.373565    7600 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:24:54.373605    7600 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:54.373609    7600 start.go:729] Will try again in 5 seconds ...
	I1216 03:24:59.375841    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:24:59.376272    7600 start.go:364] duration metric: took 349.917µs to acquireMachinesLock for "functional-648000"
	I1216 03:24:59.376418    7600 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:24:59.376431    7600 fix.go:54] fixHost starting: 
	I1216 03:24:59.377158    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
	W1216 03:24:59.377178    7600 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:24:59.380557    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
	I1216 03:24:59.384563    7600 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:24:59.384783    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
	I1216 03:24:59.395095    7600 main.go:141] libmachine: STDOUT: 
	I1216 03:24:59.395132    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:24:59.395212    7600 fix.go:56] duration metric: took 18.784625ms for fixHost
	I1216 03:24:59.395226    7600 start.go:83] releasing machines lock for "functional-648000", held for 18.93925ms
	W1216 03:24:59.395445    7600 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:24:59.404580    7600 out.go:201] 
	W1216 03:24:59.408700    7600 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:24:59.408723    7600 out.go:270] * 
	W1216 03:24:59.411184    7600 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:24:59.418551    7600 out.go:201] 
	
	
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-648000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
|         | -p download-only-259000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
| delete  | -p download-only-259000                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
| start   | -o=json --download-only                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
|         | -p download-only-503000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| delete  | -p download-only-503000                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| delete  | -p download-only-259000                                                  | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| delete  | -p download-only-503000                                                  | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| start   | --download-only -p                                                       | binary-mirror-381000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | binary-mirror-381000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:60797                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-381000                                                  | binary-mirror-381000 | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| addons  | enable dashboard -p                                                      | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | addons-215000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | addons-215000                                                            |                      |         |         |                     |                     |
| start   | -p addons-215000 --wait=true                                             | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-215000                                                         | addons-215000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| start   | -p nospam-451000 -n=1 --memory=2250 --wait=false                         | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-451000 --log_dir                                                  | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-451000                                                         | nospam-451000        | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-648000 cache add                                              | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | minikube-local-cache-test:functional-648000                              |                      |         |         |                     |                     |
| cache   | functional-648000 cache delete                                           | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | minikube-local-cache-test:functional-648000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| ssh     | functional-648000 ssh sudo                                               | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-648000                                                        | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-648000 ssh                                                    | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-648000 cache reload                                           | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
| ssh     | functional-648000 ssh                                                    | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:24 PST | 16 Dec 24 03:24 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-648000 kubectl --                                             | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --context functional-648000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-648000                                                     | functional-648000    | jenkins | v1.34.0 | 16 Dec 24 03:24 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/16 03:24:54
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1216 03:24:54.259463    7600 out.go:345] Setting OutFile to fd 1 ...
I1216 03:24:54.259621    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:24:54.259622    7600 out.go:358] Setting ErrFile to fd 2...
I1216 03:24:54.259624    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:24:54.259745    7600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:24:54.260906    7600 out.go:352] Setting JSON to false
I1216 03:24:54.279620    7600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1734343229,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1216 03:24:54.279739    7600 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1216 03:24:54.286229    7600 out.go:177] * [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1216 03:24:54.293162    7600 out.go:177]   - MINIKUBE_LOCATION=20107
I1216 03:24:54.293219    7600 notify.go:220] Checking for updates...
I1216 03:24:54.299148    7600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
I1216 03:24:54.306192    7600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1216 03:24:54.316131    7600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 03:24:54.320126    7600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
I1216 03:24:54.323155    7600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1216 03:24:54.327459    7600 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:24:54.327513    7600 driver.go:394] Setting default libvirt URI to qemu:///system
I1216 03:24:54.332172    7600 out.go:177] * Using the qemu2 driver based on existing profile
I1216 03:24:54.339090    7600 start.go:297] selected driver: qemu2
I1216 03:24:54.339094    7600 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 03:24:54.339143    7600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 03:24:54.342029    7600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 03:24:54.342047    7600 cni.go:84] Creating CNI manager for ""
I1216 03:24:54.342079    7600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1216 03:24:54.342148    7600 start.go:340] cluster config:
{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 03:24:54.346890    7600 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 03:24:54.355160    7600 out.go:177] * Starting "functional-648000" primary control-plane node in "functional-648000" cluster
I1216 03:24:54.359143    7600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1216 03:24:54.359157    7600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1216 03:24:54.359168    7600 cache.go:56] Caching tarball of preloaded images
I1216 03:24:54.359259    7600 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1216 03:24:54.359262    7600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1216 03:24:54.359332    7600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/functional-648000/config.json ...
I1216 03:24:54.359832    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 03:24:54.359883    7600 start.go:364] duration metric: took 46.5µs to acquireMachinesLock for "functional-648000"
I1216 03:24:54.359890    7600 start.go:96] Skipping create...Using existing machine configuration
I1216 03:24:54.359894    7600 fix.go:54] fixHost starting: 
I1216 03:24:54.360026    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
W1216 03:24:54.360035    7600 fix.go:138] unexpected machine state, will restart: <nil>
I1216 03:24:54.368176    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
I1216 03:24:54.371090    7600 qemu.go:418] Using hvf for hardware acceleration
I1216 03:24:54.371144    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
I1216 03:24:54.373512    7600 main.go:141] libmachine: STDOUT: 
I1216 03:24:54.373525    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1216 03:24:54.373557    7600 fix.go:56] duration metric: took 13.66275ms for fixHost
I1216 03:24:54.373560    7600 start.go:83] releasing machines lock for "functional-648000", held for 13.674709ms
W1216 03:24:54.373565    7600 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1216 03:24:54.373605    7600 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1216 03:24:54.373609    7600 start.go:729] Will try again in 5 seconds ...
I1216 03:24:59.375841    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 03:24:59.376272    7600 start.go:364] duration metric: took 349.917µs to acquireMachinesLock for "functional-648000"
I1216 03:24:59.376418    7600 start.go:96] Skipping create...Using existing machine configuration
I1216 03:24:59.376431    7600 fix.go:54] fixHost starting: 
I1216 03:24:59.377158    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
W1216 03:24:59.377178    7600 fix.go:138] unexpected machine state, will restart: <nil>
I1216 03:24:59.380557    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
I1216 03:24:59.384563    7600 qemu.go:418] Using hvf for hardware acceleration
I1216 03:24:59.384783    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
I1216 03:24:59.395095    7600 main.go:141] libmachine: STDOUT: 
I1216 03:24:59.395132    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1216 03:24:59.395212    7600 fix.go:56] duration metric: took 18.784625ms for fixHost
I1216 03:24:59.395226    7600 start.go:83] releasing machines lock for "functional-648000", held for 18.93925ms
W1216 03:24:59.395445    7600 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1216 03:24:59.404580    7600 out.go:201] 
W1216 03:24:59.408700    7600 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1216 03:24:59.408723    7600 out.go:270] * 
W1216 03:24:59.411184    7600 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 03:24:59.418551    7600 out.go:201] 

* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
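Both logs failures share one root cause visible in the "Last Start" capture above: every VM start attempt ends with qemu unable to reach the socket_vmnet network daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the functional-648000 host never leaves the Stopped state and `minikube logs` has no Linux guest to collect from. A minimal shell sketch of a pre-flight check for the agent follows; the socket and client paths are taken from the libmachine command lines above, while the check itself is a hypothetical addition, not part of minikube or the test suite:

# Hypothetical pre-flight check for the CI agent (paths from the log above).
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket present at $SOCK"
else
  echo "no socket at $SOCK -- the socket_vmnet daemon is not running"
fi
# Exercise the same client binary the qemu2 driver invokes; with no daemon
# listening, this reproduces the "Connection refused" failure seen above.
/opt/socket_vmnet/bin/socket_vmnet_client "$SOCK" true || echo "connect to socket_vmnet failed"

If the check fails, restarting the socket_vmnet service on the agent (for a Homebrew install, typically `sudo brew services start socket_vmnet`) should clear this whole class of GUEST_PROVISION failures.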

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1737559852/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Last Start <==
Log file created at: 2024/12/16 03:24:54
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1216 03:24:54.259463    7600 out.go:345] Setting OutFile to fd 1 ...
I1216 03:24:54.259621    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:24:54.259622    7600 out.go:358] Setting ErrFile to fd 2...
I1216 03:24:54.259624    7600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:24:54.259745    7600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:24:54.260906    7600 out.go:352] Setting JSON to false
I1216 03:24:54.279620    7600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1734343229,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1216 03:24:54.279739    7600 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1216 03:24:54.286229    7600 out.go:177] * [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1216 03:24:54.293162    7600 out.go:177]   - MINIKUBE_LOCATION=20107
I1216 03:24:54.293219    7600 notify.go:220] Checking for updates...
I1216 03:24:54.299148    7600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
I1216 03:24:54.306192    7600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1216 03:24:54.316131    7600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 03:24:54.320126    7600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
I1216 03:24:54.323155    7600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1216 03:24:54.327459    7600 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:24:54.327513    7600 driver.go:394] Setting default libvirt URI to qemu:///system
I1216 03:24:54.332172    7600 out.go:177] * Using the qemu2 driver based on existing profile
I1216 03:24:54.339090    7600 start.go:297] selected driver: qemu2
I1216 03:24:54.339094    7600 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 03:24:54.339143    7600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 03:24:54.342029    7600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 03:24:54.342047    7600 cni.go:84] Creating CNI manager for ""
I1216 03:24:54.342079    7600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1216 03:24:54.342148    7600 start.go:340] cluster config:
{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 03:24:54.346890    7600 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 03:24:54.355160    7600 out.go:177] * Starting "functional-648000" primary control-plane node in "functional-648000" cluster
I1216 03:24:54.359143    7600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1216 03:24:54.359157    7600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1216 03:24:54.359168    7600 cache.go:56] Caching tarball of preloaded images
I1216 03:24:54.359259    7600 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1216 03:24:54.359262    7600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1216 03:24:54.359332    7600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/functional-648000/config.json ...
I1216 03:24:54.359832    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 03:24:54.359883    7600 start.go:364] duration metric: took 46.5µs to acquireMachinesLock for "functional-648000"
I1216 03:24:54.359890    7600 start.go:96] Skipping create...Using existing machine configuration
I1216 03:24:54.359894    7600 fix.go:54] fixHost starting: 
I1216 03:24:54.360026    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
W1216 03:24:54.360035    7600 fix.go:138] unexpected machine state, will restart: <nil>
I1216 03:24:54.368176    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
I1216 03:24:54.371090    7600 qemu.go:418] Using hvf for hardware acceleration
I1216 03:24:54.371144    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
I1216 03:24:54.373512    7600 main.go:141] libmachine: STDOUT: 
I1216 03:24:54.373525    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1216 03:24:54.373557    7600 fix.go:56] duration metric: took 13.66275ms for fixHost
I1216 03:24:54.373560    7600 start.go:83] releasing machines lock for "functional-648000", held for 13.674709ms
W1216 03:24:54.373565    7600 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1216 03:24:54.373605    7600 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1216 03:24:54.373609    7600 start.go:729] Will try again in 5 seconds ...
I1216 03:24:59.375841    7600 start.go:360] acquireMachinesLock for functional-648000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 03:24:59.376272    7600 start.go:364] duration metric: took 349.917µs to acquireMachinesLock for "functional-648000"
I1216 03:24:59.376418    7600 start.go:96] Skipping create...Using existing machine configuration
I1216 03:24:59.376431    7600 fix.go:54] fixHost starting: 
I1216 03:24:59.377158    7600 fix.go:112] recreateIfNeeded on functional-648000: state=Stopped err=<nil>
W1216 03:24:59.377178    7600 fix.go:138] unexpected machine state, will restart: <nil>
I1216 03:24:59.380557    7600 out.go:177] * Restarting existing qemu2 VM for "functional-648000" ...
I1216 03:24:59.384563    7600 qemu.go:418] Using hvf for hardware acceleration
I1216 03:24:59.384783    7600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:36:db:52:2e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/functional-648000/disk.qcow2
I1216 03:24:59.395095    7600 main.go:141] libmachine: STDOUT: 
I1216 03:24:59.395132    7600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1216 03:24:59.395212    7600 fix.go:56] duration metric: took 18.784625ms for fixHost
I1216 03:24:59.395226    7600 start.go:83] releasing machines lock for "functional-648000", held for 18.93925ms
W1216 03:24:59.395445    7600 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-648000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1216 03:24:59.404580    7600 out.go:201] 
W1216 03:24:59.408700    7600 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1216 03:24:59.408723    7600 out.go:270] * 
W1216 03:24:59.411184    7600 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 03:24:59.418551    7600 out.go:201] 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
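
Every failed start in this run dies at the same point: the qemu2 driver cannot reach the host's socket_vmnet unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal host-side probe of that socket, sketched below in Go, would distinguish a stopped socket_vmnet daemon from a minikube-side fault; only the socket path is taken from the log above, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Dial the same unix socket the qemu2 driver failed to reach.
    	// Path taken from the log; this probe itself is hypothetical.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		// "connection refused" or "no such file or directory" here
    		// means the socket_vmnet daemon is not running on the host.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is up")
    }

If the probe fails as above, the daemon is down on the host; per the minikube qemu2 driver documentation it is typically started with root privileges via Homebrew services (sudo brew services start socket_vmnet).
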
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-648000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-648000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.537ms)
** stderr ** 
	error: context "functional-648000" does not exist
** /stderr **
functional_test.go:2323: kubectl --context functional-648000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
TestFunctional/parallel/DashboardCmd (0.21s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-648000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-648000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-648000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-648000 --alsologtostderr -v=1] stderr:
I1216 03:25:38.588329    7925 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:38.588804    7925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:38.588808    7925 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:38.588810    7925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:38.588974    7925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:38.589184    7925 mustload.go:65] Loading cluster: functional-648000
I1216 03:25:38.589393    7925 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:38.594042    7925 out.go:177] * The control-plane node functional-648000 host is not running: state=Stopped
I1216 03:25:38.598008    7925 out.go:177]   To start a cluster, run: "minikube start -p functional-648000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (46.231458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
TestFunctional/parallel/StatusCmd (0.14s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 status: exit status 7 (34.609708ms)
-- stdout --
	functional-648000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-648000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.832208ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-648000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 status -o json: exit status 7 (34.671292ms)
-- stdout --
	{"Name":"functional-648000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-648000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (34.972666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
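
The repeated exit status 7 is consistent with the all-Stopped output rather than a crash: minikube's status command composes its exit code from per-component bit flags (host, kubelet, apiserver). A small sketch of that arithmetic; the constant names are illustrative paraphrases of the flags in minikube's cmd/minikube/cmd/status.go, not imports:

    package main

    import "fmt"

    // Illustrative names; minikube's status command defines equivalent
    // 1<<0, 1<<1, 1<<2 flags that are OR'ed into the exit code.
    const (
    	hostNotRunning      = 1 << 0 // host: Stopped
    	kubeletNotRunning   = 1 << 1 // kubelet: Stopped
    	apiserverNotRunning = 1 << 2 // apiserver: Stopped
    )

    func main() {
    	// All three components stopped: 1|2|4 == 7, the exit status above.
    	fmt.Println(hostNotRunning | kubeletNotRunning | apiserverNotRunning)
    }
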
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-648000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-648000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.337459ms)
** stderr ** 
	error: context "functional-648000" does not exist
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-648000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-648000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-648000 describe po hello-node-connect: exit status 1 (26.553125ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000
** /stderr **
functional_test.go:1604: "kubectl --context functional-648000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-648000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-648000 logs -l app=hello-node-connect: exit status 1 (26.52875ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000
** /stderr **
functional_test.go:1610: "kubectl --context functional-648000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-648000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-648000 describe svc hello-node-connect: exit status 1 (27.205458ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000
** /stderr **
functional_test.go:1616: "kubectl --context functional-648000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.446917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
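
The kubectl failures above ("context \"functional-648000\" does not exist") follow directly from the start failure: the cluster never came up, so minikube never wrote its context into the kubeconfig. A hypothetical reproduction of kubectl's lookup, assuming client-go (k8s.io/client-go/tools/clientcmd); the CI run points KUBECONFIG at a job-specific file, so the default path here is an assumption:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	if _, ok := cfg.Contexts["functional-648000"]; !ok {
    		// Matches the kubectl errors above: no cluster start,
    		// no context entry.
    		fmt.Println(`context "functional-648000" does not exist`)
    	}
    }
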
TestFunctional/parallel/PersistentVolumeClaim (0.04s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-648000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (34.766583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)
TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "echo hello": exit status 83 (46.037959ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n"*. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "cat /etc/hostname": exit status 83 (57.661167ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-648000"- but got *"* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n"*. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.135625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)
TestFunctional/parallel/CpCmd (0.3s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.167ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /home/docker/cp-test.txt": exit status 83 (47.931667ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-648000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-648000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cp functional-648000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd173583843/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 cp functional-648000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd173583843/001/cp-test.txt: exit status 83 (45.855084ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 cp functional-648000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd173583843/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.712ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd173583843/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.901417ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.89025ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-648000 ssh -n functional-648000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-648000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-648000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
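
The "(-want +got)" blocks in this test are textual diffs in the style of github.com/google/go-cmp: "-" lines show the expected content of testdata/cp-test.txt, "+" lines show what the stopped-host message actually produced. A standalone sketch reproducing the format, with illustrative strings, assuming the helpers use go-cmp (which the strings.Join output strongly suggests):

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	// Illustrative values standing in for the test's want/got.
    	want := "Test file for checking file cp process"
    	got := "* The control-plane node functional-648000 host is not running: state=Stopped\n"
    	// cmp.Diff returns "" for equal values, otherwise a "-want +got"
    	// textual diff like the ones logged above.
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("content mismatch (-want +got):\n%s", diff)
    	}
    }
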
TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7256/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/test/nested/copy/7256/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/test/nested/copy/7256/hosts": exit status 83 (46.783875ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/test/nested/copy/7256/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-648000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-648000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.53175ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
TestFunctional/parallel/CertSync (0.32s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7256.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/7256.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/7256.pem": exit status 83 (47.325333ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7256.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /etc/ssl/certs/7256.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7256.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7256.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /usr/share/ca-certificates/7256.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /usr/share/ca-certificates/7256.pem": exit status 83 (51.635458ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7256.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /usr/share/ca-certificates/7256.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7256.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.662541ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/72562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/72562.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/72562.pem": exit status 83 (44.667083ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/72562.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /etc/ssl/certs/72562.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/72562.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/72562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /usr/share/ca-certificates/72562.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /usr/share/ca-certificates/72562.pem": exit status 83 (44.726375ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/72562.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /usr/share/ca-certificates/72562.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/72562.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (48.789208ms)
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-648000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-648000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.683625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.32s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-648000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-648000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.910667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-648000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-648000 -n functional-648000: exit status 7 (35.031583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-648000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
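
Annotator's note: the NodeLabels assertions list the first node's label keys with a go-template and then require each minikube.k8s.io/* label. A sketch of that check (template taken from the log; the shell-style quotes are dropped because exec passes arguments verbatim), which fails the same way when the kubeconfig has no "functional-648000" context:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-648000",
		"get", "nodes", "--output=go-template", tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	for _, label := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
		"minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), label) {
			fmt.Println("missing label:", label)
		}
	}
}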

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo systemctl is-active crio": exit status 83 (47.28425ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
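
Annotator's note: in these logs, "exit status 83" consistently accompanies the stopped-host message, so the exit code alone separates "guest not running" from a real assertion failure. A sketch of inspecting it in Go (same command as the failure above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-648000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// On this run this prints 83, alongside the stopped-host message.
		fmt.Println("exit code:", ee.ExitCode())
		return
	}
	fmt.Printf("crio is-active: %s", out)
}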

                                                
                                    
TestFunctional/parallel/Version/components (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 version -o=json --components: exit status 83 (45.955833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-648000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-648000 image ls --format short --alsologtostderr:
I1216 03:25:39.030546    7944 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:39.030723    7944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.030727    7944 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:39.030729    7944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.030896    7944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:39.031336    7944 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.031401    7944 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-648000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-648000 image ls --format table --alsologtostderr:
I1216 03:25:39.282239    7958 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:39.282435    7958 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.282438    7958 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:39.282441    7958 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.282569    7958 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:39.282993    7958 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.283050    7958 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1216 03:25:41.012560    7256 retry.go:31] will retry after 25.646104868s: Temporary Error: Get "http:": http: no Host in request URL
I1216 03:26:06.660657    7256 retry.go:31] will retry after 32.895480576s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-648000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-648000 image ls --format json --alsologtostderr:
I1216 03:25:39.240914    7956 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:39.241083    7956 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.241086    7956 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:39.241088    7956 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.241235    7956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:39.241657    7956 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.241725    7956 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
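
Annotator's note: the image-list tests only assert that registry.k8s.io/pause appears in the requested format, and here stdout is the empty list "[]". A sketch of a structured version of the JSON check; the repoTags field name is a guess at minikube's `image ls --format json` schema, not confirmed by this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image models only the field this check needs; the name is an assumption.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-648000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "registry.k8s.io/pause") {
				fmt.Println("found:", tag)
				return
			}
		}
	}
	fmt.Println("registry.k8s.io/pause not in image list") // this run: empty list
}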

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-648000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-648000 image ls --format yaml --alsologtostderr:
I1216 03:25:39.070736    7946 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:39.070914    7946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.070917    7946 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:39.070920    7946 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.071041    7946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:39.071462    7946 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.071522    7946 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh pgrep buildkitd: exit status 83 (48.008625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image build -t localhost/my-image:functional-648000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-648000 image build -t localhost/my-image:functional-648000 testdata/build --alsologtostderr:
I1216 03:25:39.159379    7952 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:39.159897    7952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.159901    7952 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:39.159908    7952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:39.160055    7952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:39.160459    7952 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.160926    7952 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:39.161155    7952 build_images.go:133] succeeded building to: 
I1216 03:25:39.161158    7952 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
functional_test.go:446: expected "localhost/my-image:functional-648000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-648000 docker-env) && out/minikube-darwin-arm64 status -p functional-648000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-648000 docker-env) && out/minikube-darwin-arm64 status -p functional-648000": exit status 1 (49.987167ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
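
Annotator's note: this test drives a real bash so that `eval $(minikube docker-env)` exports DOCKER_HOST and friends before `minikube status` runs in the same shell. A sketch replaying the exact command line from functional_test.go:499:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	script := `eval $(out/minikube-darwin-arm64 -p functional-648000 docker-env) && ` +
		`out/minikube-darwin-arm64 status -p functional-648000`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("status after docker-env failed:", err) // exit status 1 on this run
	}
}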

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2: exit status 83 (45.848375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:25:38.887460    7936 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:25:38.887853    7936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.887857    7936 out.go:358] Setting ErrFile to fd 2...
	I1216 03:25:38.887860    7936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.887985    7936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:25:38.888198    7936 mustload.go:65] Loading cluster: functional-648000
	I1216 03:25:38.888407    7936 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:25:38.891839    7936 out.go:177] * The control-plane node functional-648000 host is not running: state=Stopped
	I1216 03:25:38.895812    7936 out.go:177]   To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2: exit status 83 (47.332208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:25:38.982557    7942 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:25:38.982721    7942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.982725    7942 out.go:358] Setting ErrFile to fd 2...
	I1216 03:25:38.982727    7942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.982865    7942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:25:38.983094    7942 mustload.go:65] Loading cluster: functional-648000
	I1216 03:25:38.983292    7942 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:25:38.987767    7942 out.go:177] * The control-plane node functional-648000 host is not running: state=Stopped
	I1216 03:25:38.991757    7942 out.go:177]   To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2: exit status 83 (47.477834ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:25:38.934535    7940 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:25:38.934734    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.934738    7940 out.go:358] Setting ErrFile to fd 2...
	I1216 03:25:38.934740    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.934889    7940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:25:38.935128    7940 mustload.go:65] Loading cluster: functional-648000
	I1216 03:25:38.935368    7940 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:25:38.939900    7940 out.go:177] * The control-plane node functional-648000 host is not running: state=Stopped
	I1216 03:25:38.943637    7940 out.go:177]   To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-648000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-648000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-648000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.2425ms)

                                                
                                                
** stderr ** 
	error: context "functional-648000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-648000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 service list: exit status 83 (48.300958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-648000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 service list -o json: exit status 83 (46.92425ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-648000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 service --namespace=default --https --url hello-node: exit status 83 (46.720584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-648000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 service hello-node --url --format={{.IP}}: exit status 83 (46.907667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-648000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 service hello-node --url: exit status 83 (47.792417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-648000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test.go:1569: failed to parse "* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"": parse "* The control-plane node functional-648000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-648000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
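
Annotator's note: the URL test parses whatever `service ... --url` prints with net/url. Here the output was the two-line stopped-host message, and the embedded newline is the "invalid control character" net/url rejects. A sketch reproducing that, with a hypothetical well-formed service URL for contrast (the address below is illustrative, not from this run):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	bad := "* The control-plane node functional-648000 host is not running: state=Stopped\n" +
		`  To start a cluster, run: "minikube start -p functional-648000"`
	if _, err := url.Parse(bad); err != nil {
		fmt.Println(err) // ... net/url: invalid control character in URL
	}
	if u, err := url.Parse("http://192.168.105.4:31234"); err == nil {
		fmt.Println(u.Host) // 192.168.105.4:31234
	}
}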

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1216 03:25:01.378062    7719 out.go:345] Setting OutFile to fd 1 ...
I1216 03:25:01.378251    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:01.378255    7719 out.go:358] Setting ErrFile to fd 2...
I1216 03:25:01.378257    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:25:01.378391    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:25:01.378689    7719 mustload.go:65] Loading cluster: functional-648000
I1216 03:25:01.378932    7719 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:25:01.382926    7719 out.go:177] * The control-plane node functional-648000 host is not running: state=Stopped
I1216 03:25:01.386050    7719 out.go:177]   To start a cluster, run: "minikube start -p functional-648000"

                                                
                                                
stdout: * The control-plane node functional-648000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-648000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7720: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
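
Annotator's note: a sketch of the daemon pattern the harness uses here: start `minikube tunnel` in the background, then stop it. On this run the tunnel exits immediately with status 83, which is why the teardown finds the process already gone ("process already finished") and the pipe reads fail with "file already closed":

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-648000",
		"tunnel", "--alsologtostderr")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	time.Sleep(2 * time.Second)
	// Signal returns "os: process already finished" if the tunnel has died,
	// exactly as logged above.
	_ = cmd.Process.Signal(syscall.SIGTERM)
	if err := cmd.Wait(); err != nil {
		fmt.Println("tunnel exited:", err)
	}
}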

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-648000": client config: context "functional-648000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1216 03:25:01.462192    7256 retry.go:31] will retry after 4.355339212s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-648000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-648000 get svc nginx-svc: exit status 1 (71.452041ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-648000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-648000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.19s)
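
Annotator's note: the retry loop above hammers the URL recorded for nginx-svc, but because the tunnel never registered an endpoint, that URL is just "http:" with no host, and net/http rejects it before any connection attempt. The one-liner reproduces the exact error string from the retries:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}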

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image load --daemon kicbase/echo-server:functional-648000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-648000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image load --daemon kicbase/echo-server:functional-648000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-648000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-648000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image load --daemon kicbase/echo-server:functional-648000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-648000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image save kicbase/echo-server:functional-648000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-648000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1216 03:26:39.646604    7256 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.034326s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
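
Annotator's note: a Go equivalent of the dig probe above, asking the cluster DNS service at 10.96.0.10 (reachable only while the tunnel is up) for the nginx service record with the same 5s per-attempt timeout. With the tunnel down it times out, matching dig's "no servers could be reached":

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Route every lookup to the in-cluster DNS service, like dig @10.96.0.10.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}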

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1216 03:27:04.786061    7256 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:27:14.788365    7256 retry.go:31] will retry after 4.039244345s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1216 03:27:28.832011    7256 retry.go:31] will retry after 5.90780452s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:64355->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-173000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-173000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.931628292s)

                                                
                                                
-- stdout --
	* [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-173000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:27:35.200900    8032 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:27:35.201075    8032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:27:35.201079    8032 out.go:358] Setting ErrFile to fd 2...
	I1216 03:27:35.201081    8032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:27:35.201216    8032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:27:35.202388    8032 out.go:352] Setting JSON to false
	I1216 03:27:35.220243    8032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5226,"bootTime":1734343229,"procs":573,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:27:35.220321    8032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:27:35.226238    8032 out.go:177] * [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:27:35.234260    8032 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:27:35.234305    8032 notify.go:220] Checking for updates...
	I1216 03:27:35.241204    8032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:27:35.244267    8032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:27:35.247152    8032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:27:35.250244    8032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:27:35.253244    8032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:27:35.254708    8032 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:27:35.258220    8032 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:27:35.265098    8032 start.go:297] selected driver: qemu2
	I1216 03:27:35.265107    8032 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:27:35.265114    8032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:27:35.267679    8032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:27:35.272202    8032 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:27:35.275364    8032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:27:35.275379    8032 cni.go:84] Creating CNI manager for ""
	I1216 03:27:35.275399    8032 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1216 03:27:35.275403    8032 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:27:35.275447    8032 start.go:340] cluster config:
	{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:27:35.280150    8032 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:27:35.288178    8032 out.go:177] * Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	I1216 03:27:35.291173    8032 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:27:35.291195    8032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:27:35.291210    8032 cache.go:56] Caching tarball of preloaded images
	I1216 03:27:35.291297    8032 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:27:35.291302    8032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:27:35.291502    8032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/ha-173000/config.json ...
	I1216 03:27:35.291513    8032 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/ha-173000/config.json: {Name:mk69b8d2dd7ec0d1392f7ccf5f3ff32336148f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:27:35.291919    8032 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:27:35.291967    8032 start.go:364] duration metric: took 42.333µs to acquireMachinesLock for "ha-173000"
	I1216 03:27:35.291979    8032 start.go:93] Provisioning new machine with config: &{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:27:35.292005    8032 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:27:35.295262    8032 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:27:35.311311    8032 start.go:159] libmachine.API.Create for "ha-173000" (driver="qemu2")
	I1216 03:27:35.311338    8032 client.go:168] LocalClient.Create starting
	I1216 03:27:35.311404    8032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:27:35.311440    8032 main.go:141] libmachine: Decoding PEM data...
	I1216 03:27:35.311451    8032 main.go:141] libmachine: Parsing certificate...
	I1216 03:27:35.311485    8032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:27:35.311516    8032 main.go:141] libmachine: Decoding PEM data...
	I1216 03:27:35.311525    8032 main.go:141] libmachine: Parsing certificate...
	I1216 03:27:35.312101    8032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:27:35.472959    8032 main.go:141] libmachine: Creating SSH key...
	I1216 03:27:35.527818    8032 main.go:141] libmachine: Creating Disk image...
	I1216 03:27:35.527823    8032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:27:35.528068    8032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:35.537843    8032 main.go:141] libmachine: STDOUT: 
	I1216 03:27:35.537864    8032 main.go:141] libmachine: STDERR: 
	I1216 03:27:35.537927    8032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2 +20000M
	I1216 03:27:35.546346    8032 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:27:35.546362    8032 main.go:141] libmachine: STDERR: 
	I1216 03:27:35.546374    8032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:35.546379    8032 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:27:35.546390    8032 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:27:35.546419    8032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:8e:18:74:f9:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:35.548240    8032 main.go:141] libmachine: STDOUT: 
	I1216 03:27:35.548255    8032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:27:35.548274    8032 client.go:171] duration metric: took 236.933084ms to LocalClient.Create
	I1216 03:27:37.550520    8032 start.go:128] duration metric: took 2.25845s to createHost
	I1216 03:27:37.550599    8032 start.go:83] releasing machines lock for "ha-173000", held for 2.258652917s
	W1216 03:27:37.550638    8032 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:27:37.567085    8032 out.go:177] * Deleting "ha-173000" in qemu2 ...
	W1216 03:27:37.595845    8032 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:27:37.595923    8032 start.go:729] Will try again in 5 seconds ...
	I1216 03:27:42.598103    8032 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:27:42.598674    8032 start.go:364] duration metric: took 418.416µs to acquireMachinesLock for "ha-173000"
	I1216 03:27:42.598799    8032 start.go:93] Provisioning new machine with config: &{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:27:42.599095    8032 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:27:42.605786    8032 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:27:42.655035    8032 start.go:159] libmachine.API.Create for "ha-173000" (driver="qemu2")
	I1216 03:27:42.655097    8032 client.go:168] LocalClient.Create starting
	I1216 03:27:42.655287    8032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:27:42.655374    8032 main.go:141] libmachine: Decoding PEM data...
	I1216 03:27:42.655391    8032 main.go:141] libmachine: Parsing certificate...
	I1216 03:27:42.655460    8032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:27:42.655521    8032 main.go:141] libmachine: Decoding PEM data...
	I1216 03:27:42.655541    8032 main.go:141] libmachine: Parsing certificate...
	I1216 03:27:42.656153    8032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:27:42.836604    8032 main.go:141] libmachine: Creating SSH key...
	I1216 03:27:43.025123    8032 main.go:141] libmachine: Creating Disk image...
	I1216 03:27:43.025130    8032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:27:43.025394    8032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:43.035953    8032 main.go:141] libmachine: STDOUT: 
	I1216 03:27:43.035971    8032 main.go:141] libmachine: STDERR: 
	I1216 03:27:43.036029    8032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2 +20000M
	I1216 03:27:43.044607    8032 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:27:43.044627    8032 main.go:141] libmachine: STDERR: 
	I1216 03:27:43.044637    8032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:43.044643    8032 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:27:43.044649    8032 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:27:43.044686    8032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:26:1f:80:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:27:43.046582    8032 main.go:141] libmachine: STDOUT: 
	I1216 03:27:43.046596    8032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:27:43.046607    8032 client.go:171] duration metric: took 391.497625ms to LocalClient.Create
	I1216 03:27:45.048802    8032 start.go:128] duration metric: took 2.449693625s to createHost
	I1216 03:27:45.048878    8032 start.go:83] releasing machines lock for "ha-173000", held for 2.450195959s
	W1216 03:27:45.049300    8032 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:27:45.063945    8032 out.go:201] 
	W1216 03:27:45.069021    8032 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:27:45.069058    8032 out.go:270] * 
	* 
	W1216 03:27:45.071604    8032 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:27:45.086018    8032 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-173000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (77.467375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.01s)
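
Both create attempts in this test die on the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU never starts and the profile is left "Stopped". The sketch below (Go, assuming nothing beyond the socket path quoted in the log; probe.go is a hypothetical diagnostic, not part of the test harness) reproduces the failing handshake: socket_vmnet_client begins by connecting to that unix socket, so a plain dial sees the same "connection refused" whenever the daemon is not running on the agent.

    // probe.go - hypothetical diagnostic, not part of the test harness.
    // Dials the unix socket that socket_vmnet_client uses; if the daemon
    // is down, this fails with the same "Connection refused" as above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the failure output
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails, the fix is on the host, not in minikube: restart the socket_vmnet daemon (how depends on the install; the /opt/socket_vmnet paths above suggest a manual install managed by launchd) and re-run the suite.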

                                                
                                    
TestMultiControlPlane/serial/DeployApp (80.08s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.584167ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-173000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- rollout status deployment/busybox: exit status 1 (62.40925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.798833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:45.369586    7256 retry.go:31] will retry after 691.104128ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.444708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:46.172438    7256 retry.go:31] will retry after 1.024736811s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.743125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:47.308187    7256 retry.go:31] will retry after 1.242092552s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.447417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:48.660100    7256 retry.go:31] will retry after 3.219512861s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.174834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:51.991236    7256 retry.go:31] will retry after 3.103505945s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.265291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:27:55.205424    7256 retry.go:31] will retry after 5.219791304s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.375042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:28:00.537833    7256 retry.go:31] will retry after 6.774318923s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.223083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:28:07.423861    7256 retry.go:31] will retry after 19.195611428s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.590375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:28:26.731215    7256 retry.go:31] will retry after 38.123306997s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.463958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.921541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.88825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.903167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.361167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.937125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (80.08s)
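
Nearly all of the 80s charged to this test is the retry loop above: the intervals grow from roughly 0.7s to 38s, i.e. approximately geometric growth with jitter, until the overall budget is spent. minikube's actual retry.go policy is not shown in the log; the following is only a sketch of the pattern the timestamps suggest, with the growth factor chosen to roughly match them.

    // backoff.go - illustrative only; mirrors the observed "will retry
    // after ..." intervals, not minikube's real retry.go implementation.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(budget time.Duration, op func() error) error {
        deadline := time.Now().Add(budget)
        wait := 700 * time.Millisecond // first observed interval was ~691ms
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Now().Add(wait).After(deadline) {
                return fmt.Errorf("budget exhausted: %w", err)
            }
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            // grow by a random factor in [1.5, 2.5), roughly matching the log
            wait = time.Duration(float64(wait) * (1.5 + rand.Float64()))
        }
    }

    func main() {
        _ = retryWithBackoff(80*time.Second, func() error {
            return errors.New(`no server found for cluster "ha-173000"`)
        })
    }

Since the cluster never existed, every attempt fails identically; the backoff only determines how long the test burns before giving up.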

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-173000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.193333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-173000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.962833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-173000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-173000 -v=7 --alsologtostderr: exit status 83 (48.824917ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.386096    8159 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.386465    8159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.386474    8159 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.386476    8159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.386619    8159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.386835    8159 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.387032    8159 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.391718    8159 out.go:177] * The control-plane node ha-173000 host is not running: state=Stopped
	I1216 03:29:05.396761    8159 out.go:177]   To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-173000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.477042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-173000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-173000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.08975ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-173000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-173000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-173000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (35.229583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
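
The trailing "unexpected end of JSON input" here is a knock-on error, not a second bug: with the ha-173000 context missing, kubectl writes nothing to stdout, and decoding an empty byte slice as JSON fails with exactly that message, as this minimal Go reproduction shows.

    // Decoding zero bytes of JSON yields the error quoted at ha_test.go:264.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }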

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-173000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-173000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (35.501167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
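
The two assertions above only inspect a handful of fields in the `profile list --output json` blob: the profile Status and the length of Config.Nodes. A trimmed decoder makes the comparison concrete; the field names below are taken from the JSON in the log, everything else is omitted since encoding/json ignores unknown keys. This run has one node and status "Starting" where the test wants four nodes and "HAppy".

    // Trimmed view of `minikube profile list --output json`; the struct
    // keeps only the keys the HAppy assertions read.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Abbreviated form of the output captured above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-173000","Status":"Starting","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        p := pl.Valid[0]
        fmt.Printf("%s: status=%q nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
    }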

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.07s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status --output json -v=7 --alsologtostderr: exit status 7 (35.281625ms)

                                                
                                                
-- stdout --
	{"Name":"ha-173000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.620995    8171 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.621194    8171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.621197    8171 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.621200    8171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.621325    8171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.621487    8171 out.go:352] Setting JSON to true
	I1216 03:29:05.621498    8171 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.621558    8171 notify.go:220] Checking for updates...
	I1216 03:29:05.621690    8171 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.621696    8171 status.go:174] checking status of ha-173000 ...
	I1216 03:29:05.621929    8171 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:05.621933    8171 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:05.621935    8171 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-173000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.560125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
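
Unlike the neighbouring failures, this one is a shape mismatch rather than a missing cluster: with a single node, `status --output json` emits one JSON object, while the test unmarshals into a slice ([]cluster.Status), and encoding/json refuses to coerce an object into a slice. The sketch below shows a tolerant decoder that accepts both shapes; the Status struct is a hypothetical trim of minikube's cluster.Status, keeping only fields visible in the log.

    // decodeStatuses accepts either a single status object or an array,
    // sidestepping the "cannot unmarshal object into Go value of type
    // []cluster.Status" failure quoted above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name string
        Host string
    }

    func decodeStatuses(raw []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, fmt.Errorf("neither object nor array: %w", err)
        }
        return []Status{one}, nil
    }

    func main() {
        // Single-node output, exactly as captured in the log above.
        raw := []byte(`{"Name":"ha-173000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        sts, err := decodeStatuses(raw)
        fmt.Println(sts, err) // [{ha-173000 Stopped}] <nil>
    }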

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.13s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 node stop m02 -v=7 --alsologtostderr: exit status 85 (54.535083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.691797    8175 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.692232    8175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.692236    8175 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.692238    8175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.692361    8175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.692597    8175 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.692797    8175 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.697635    8175 out.go:201] 
	W1216 03:29:05.700669    8175 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1216 03:29:05.700675    8175 out.go:270] * 
	* 
	W1216 03:29:05.702619    8175 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:29:05.707652    8175 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-173000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (34.798583ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.745726    8177 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.745919    8177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.745923    8177 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.745925    8177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.746058    8177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.746187    8177 out.go:352] Setting JSON to false
	I1216 03:29:05.746197    8177 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.746264    8177 notify.go:220] Checking for updates...
	I1216 03:29:05.746410    8177 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.746420    8177 status.go:174] checking status of ha-173000 ...
	I1216 03:29:05.746670    8177 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:05.746674    8177 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:05.746676    8177 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (35.575709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.13s)
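
Note: exit status 85 here corresponds to the GUEST_NODE_RETRIEVE error class shown in the stderr above; the profile only ever registered a single control-plane node, so the lookup for "m02" fails before any stop is attempted. A minimal Go sketch of that lookup pattern (illustrative only; the type and function below are hypothetical stand-ins, not minikube's actual code):

	package main

	import "fmt"

	// node is a stand-in for a profile's node entry; minikube's real
	// config types carry many more fields.
	type node struct {
		Name         string
		ControlPlane bool
	}

	// findNode mirrors the failure above: it scans the profile's node
	// list and errors out when the requested name is absent.
	func findNode(nodes []node, name string) (*node, error) {
		for i := range nodes {
			if nodes[i].Name == name {
				return &nodes[i], nil
			}
		}
		return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		// The profile JSON later in this report lists exactly one node.
		_, err := findNode([]node{{Name: "", ControlPlane: true}}, "m02")
		fmt.Println(err) // retrieving node: Could not find node m02
	}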

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-173000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (35.046125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
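
The assertion above decodes `minikube profile list --output json` and reads the profile's Status field. A trimmed Go sketch of that check, assuming only the fields the assertion uses (field names are taken verbatim from the JSON above; the full schema is the long Config blob):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the fields of `profile list --output json`
	// that the check reads; everything else is omitted.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-173000","Status":"Starting","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		p := pl.Valid[0]
		// With one node instead of three, the status never reaches "Degraded".
		fmt.Println(p.Status == "Degraded", len(p.Config.Nodes)) // false 1
	}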

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (46.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.545417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.905741    8186 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.906158    8186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.906162    8186 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.906165    8186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.906348    8186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.906587    8186 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.906784    8186 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.909695    8186 out.go:201] 
	W1216 03:29:05.912655    8186 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1216 03:29:05.912660    8186 out.go:270] * 
	* 
	W1216 03:29:05.914469    8186 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:29:05.918646    8186 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1216 03:29:05.905741    8186 out.go:345] Setting OutFile to fd 1 ...
I1216 03:29:05.906158    8186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:29:05.906162    8186 out.go:358] Setting ErrFile to fd 2...
I1216 03:29:05.906165    8186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:29:05.906348    8186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:29:05.906587    8186 mustload.go:65] Loading cluster: ha-173000
I1216 03:29:05.906784    8186 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:29:05.909695    8186 out.go:201] 
W1216 03:29:05.912655    8186 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1216 03:29:05.912660    8186 out.go:270] * 
* 
W1216 03:29:05.914469    8186 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 03:29:05.918646    8186 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-173000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (35.558625ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:05.957369    8188 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:05.957551    8188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.957554    8188 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:05.957557    8188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:05.957703    8188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:05.957818    8188 out.go:352] Setting JSON to false
	I1216 03:29:05.957828    8188 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:05.957889    8188 notify.go:220] Checking for updates...
	I1216 03:29:05.958063    8188 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:05.958072    8188 status.go:174] checking status of ha-173000 ...
	I1216 03:29:05.958318    8188 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:05.958321    8188 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:05.958323    8188 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:05.959257    7256 retry.go:31] will retry after 544.270722ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (78.657792ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:06.582166    8190 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:06.582398    8190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:06.582403    8190 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:06.582407    8190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:06.582592    8190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:06.582751    8190 out.go:352] Setting JSON to false
	I1216 03:29:06.582764    8190 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:06.582809    8190 notify.go:220] Checking for updates...
	I1216 03:29:06.583085    8190 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:06.583111    8190 status.go:174] checking status of ha-173000 ...
	I1216 03:29:06.583448    8190 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:06.583453    8190 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:06.583455    8190 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:06.584550    7256 retry.go:31] will retry after 1.38259856s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (78.682333ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:08.046208    8192 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:08.046423    8192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:08.046427    8192 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:08.046430    8192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:08.046608    8192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:08.046749    8192 out.go:352] Setting JSON to false
	I1216 03:29:08.046760    8192 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:08.046795    8192 notify.go:220] Checking for updates...
	I1216 03:29:08.046994    8192 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:08.047002    8192 status.go:174] checking status of ha-173000 ...
	I1216 03:29:08.047285    8192 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:08.047289    8192 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:08.047292    8192 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:08.048325    7256 retry.go:31] will retry after 3.132457728s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (79.262083ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:11.260061    8196 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:11.260323    8196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:11.260328    8196 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:11.260331    8196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:11.260508    8196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:11.260685    8196 out.go:352] Setting JSON to false
	I1216 03:29:11.260697    8196 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:11.260740    8196 notify.go:220] Checking for updates...
	I1216 03:29:11.261038    8196 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:11.261061    8196 status.go:174] checking status of ha-173000 ...
	I1216 03:29:11.261383    8196 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:11.261389    8196 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:11.261392    8196 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:11.262514    7256 retry.go:31] will retry after 2.150402308s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (79.610125ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:13.492692    8200 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:13.492947    8200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:13.492951    8200 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:13.492955    8200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:13.493105    8200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:13.493276    8200 out.go:352] Setting JSON to false
	I1216 03:29:13.493289    8200 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:13.493335    8200 notify.go:220] Checking for updates...
	I1216 03:29:13.493570    8200 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:13.493579    8200 status.go:174] checking status of ha-173000 ...
	I1216 03:29:13.493900    8200 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:13.493905    8200 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:13.493907    8200 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:13.494951    7256 retry.go:31] will retry after 4.676292613s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (78.769125ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:18.250017    8204 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:18.250265    8204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:18.250269    8204 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:18.250273    8204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:18.250431    8204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:18.250611    8204 out.go:352] Setting JSON to false
	I1216 03:29:18.250624    8204 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:18.250669    8204 notify.go:220] Checking for updates...
	I1216 03:29:18.250913    8204 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:18.250921    8204 status.go:174] checking status of ha-173000 ...
	I1216 03:29:18.251242    8204 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:18.251247    8204 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:18.251250    8204 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:18.252328    7256 retry.go:31] will retry after 8.92941654s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (78.577375ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:27.258568    8210 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:27.258797    8210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:27.258801    8210 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:27.258805    8210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:27.258991    8210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:27.259147    8210 out.go:352] Setting JSON to false
	I1216 03:29:27.259160    8210 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:27.259205    8210 notify.go:220] Checking for updates...
	I1216 03:29:27.259422    8210 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:27.259431    8210 status.go:174] checking status of ha-173000 ...
	I1216 03:29:27.259759    8210 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:27.259764    8210 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:27.259767    8210 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:27.260905    7256 retry.go:31] will retry after 15.334028566s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (78.585709ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:42.673642    8219 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:42.673886    8219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:42.673891    8219 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:42.673893    8219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:42.674085    8219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:42.674268    8219 out.go:352] Setting JSON to false
	I1216 03:29:42.674280    8219 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:42.674317    8219 notify.go:220] Checking for updates...
	I1216 03:29:42.674555    8219 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:42.674563    8219 status.go:174] checking status of ha-173000 ...
	I1216 03:29:42.674876    8219 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:42.674881    8219 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:42.674884    8219 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:29:42.675939    7256 retry.go:31] will retry after 10.066266166s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (80.535708ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:52.822846    8227 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:52.823080    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:52.823085    8227 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:52.823088    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:52.823238    8227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:52.823404    8227 out.go:352] Setting JSON to false
	I1216 03:29:52.823417    8227 mustload.go:65] Loading cluster: ha-173000
	I1216 03:29:52.823454    8227 notify.go:220] Checking for updates...
	I1216 03:29:52.823743    8227 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:52.823755    8227 status.go:174] checking status of ha-173000 ...
	I1216 03:29:52.824081    8227 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:29:52.824087    8227 status.go:384] host is not running, skipping remaining checks
	I1216 03:29:52.824089    8227 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (37.241792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (46.99s)
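
The retry.go lines above show the harness re-running `minikube status` with growing, jittered delays (0.5s, 1.4s, 3.1s, ... 15.3s) until the time budget is spent, which is why this subtest alone takes ~47s. A minimal sketch of that retry-with-backoff pattern, assuming a simple doubling schedule with jitter (the exact schedule in minikube's retry.go may differ):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps invoking fn until it succeeds or the budget is spent,
	// roughly doubling a jittered delay between attempts, in the spirit
	// of the "will retry after ..." lines in the log above.
	func retry(budget time.Duration, fn func() error) error {
		deadline := time.Now().Add(budget)
		delay := 500 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			if time.Now().Add(wait).After(deadline) {
				return err
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		err := retry(5*time.Second, func() error {
			return errors.New("exit status 7") // status never recovers here
		})
		fmt.Println("gave up:", err)
	}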

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-173000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-173000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
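
The failures in this block all trace back to one driver-level fault, which the RestartClusterKeepsNodes run below makes explicit: qemu is launched through socket_vmnet_client, and /var/run/socket_vmnet refuses the connection, so no VM (and hence no second or third node) ever comes up. A quick way to reproduce just that check is to dial the unix socket directly (a hedged sketch; the socket path comes from the log, and a refused or missing socket here means the socket_vmnet daemon is not running):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path the qemu2 driver passes to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // e.g. "connection refused"
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}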

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-173000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-173000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-173000 -v=7 --alsologtostderr: (3.661314625s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.235126584s)

                                                
                                                
-- stdout --
	* [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	* Restarting existing qemu2 VM for "ha-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:56.719186    8261 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:29:56.719383    8261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:56.719387    8261 out.go:358] Setting ErrFile to fd 2...
	I1216 03:29:56.719389    8261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:56.719563    8261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:29:56.720777    8261 out.go:352] Setting JSON to false
	I1216 03:29:56.740999    8261 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5367,"bootTime":1734343229,"procs":576,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:29:56.741072    8261 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:29:56.745824    8261 out.go:177] * [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:29:56.753775    8261 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:29:56.753847    8261 notify.go:220] Checking for updates...
	I1216 03:29:56.760631    8261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:29:56.763704    8261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:29:56.767696    8261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:29:56.770727    8261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:29:56.773752    8261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:29:56.777041    8261 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:29:56.777104    8261 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:29:56.780743    8261 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:29:56.787725    8261 start.go:297] selected driver: qemu2
	I1216 03:29:56.787731    8261 start.go:901] validating driver "qemu2" against &{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:29:56.787775    8261 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:29:56.790321    8261 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:29:56.790348    8261 cni.go:84] Creating CNI manager for ""
	I1216 03:29:56.790372    8261 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 03:29:56.790426    8261 start.go:340] cluster config:
	{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:29:56.795130    8261 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:29:56.803729    8261 out.go:177] * Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	I1216 03:29:56.807785    8261 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:29:56.807802    8261 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:29:56.807816    8261 cache.go:56] Caching tarball of preloaded images
	I1216 03:29:56.807902    8261 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:29:56.807908    8261 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:29:56.807973    8261 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/ha-173000/config.json ...
	I1216 03:29:56.808424    8261 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:29:56.808472    8261 start.go:364] duration metric: took 42.583µs to acquireMachinesLock for "ha-173000"
	I1216 03:29:56.808481    8261 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:29:56.808486    8261 fix.go:54] fixHost starting: 
	I1216 03:29:56.808612    8261 fix.go:112] recreateIfNeeded on ha-173000: state=Stopped err=<nil>
	W1216 03:29:56.808620    8261 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:29:56.816698    8261 out.go:177] * Restarting existing qemu2 VM for "ha-173000" ...
	I1216 03:29:56.823803    8261 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:29:56.823852    8261 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:26:1f:80:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:29:56.826196    8261 main.go:141] libmachine: STDOUT: 
	I1216 03:29:56.826224    8261 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:29:56.826258    8261 fix.go:56] duration metric: took 17.771708ms for fixHost
	I1216 03:29:56.826263    8261 start.go:83] releasing machines lock for "ha-173000", held for 17.786375ms
	W1216 03:29:56.826270    8261 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:29:56.826318    8261 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:29:56.826323    8261 start.go:729] Will try again in 5 seconds ...
	I1216 03:30:01.828432    8261 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:30:01.828781    8261 start.go:364] duration metric: took 254.5µs to acquireMachinesLock for "ha-173000"
	I1216 03:30:01.828908    8261 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:30:01.828920    8261 fix.go:54] fixHost starting: 
	I1216 03:30:01.829346    8261 fix.go:112] recreateIfNeeded on ha-173000: state=Stopped err=<nil>
	W1216 03:30:01.829361    8261 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:30:01.837813    8261 out.go:177] * Restarting existing qemu2 VM for "ha-173000" ...
	I1216 03:30:01.841704    8261 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:30:01.841884    8261 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:26:1f:80:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:30:01.848127    8261 main.go:141] libmachine: STDOUT: 
	I1216 03:30:01.848192    8261 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:30:01.848249    8261 fix.go:56] duration metric: took 19.32925ms for fixHost
	I1216 03:30:01.848262    8261 start.go:83] releasing machines lock for "ha-173000", held for 19.434334ms
	W1216 03:30:01.848379    8261 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:30:01.856858    8261 out.go:201] 
	W1216 03:30:01.860945    8261 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:30:01.860973    8261 out.go:270] * 
	* 
	W1216 03:30:01.862169    8261 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:30:01.872816    8261 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-173000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-173000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (37.959208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.05s)
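Every restart attempt above fails at the same point: socket_vmnet_client cannot reach the daemon socket, so the QEMU VM is never launched. A minimal triage sketch, assuming the source-install layout the log paths suggest (the /opt/socket_vmnet prefix and the launchd label io.github.lima-vm.socket_vmnet are assumptions taken from the socket_vmnet README, not confirmed by this run):

	# Is the daemon listening? The path matches SocketVMnetPath in the profile config above.
	ls -l /var/run/socket_vmnet
	# If the socket is missing or stale, restart the launchd service (assumed label):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet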

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.256709ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:02.028468    8421 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:02.028851    8421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:02.028854    8421 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:02.028857    8421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:02.028977    8421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:02.029163    8421 mustload.go:65] Loading cluster: ha-173000
	I1216 03:30:02.029373    8421 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:30:02.033979    8421 out.go:177] * The control-plane node ha-173000 host is not running: state=Stopped
	I1216 03:30:02.037049    8421 out.go:177]   To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-173000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (33.738458ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:02.072944    8423 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:02.073112    8423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:02.073115    8423 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:02.073118    8423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:02.073251    8423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:02.073379    8423 out.go:352] Setting JSON to false
	I1216 03:30:02.073389    8423 mustload.go:65] Loading cluster: ha-173000
	I1216 03:30:02.073436    8423 notify.go:220] Checking for updates...
	I1216 03:30:02.073584    8423 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:30:02.073591    8423 status.go:174] checking status of ha-173000 ...
	I1216 03:30:02.073855    8423 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:30:02.073859    8423 status.go:384] host is not running, skipping remaining checks
	I1216 03:30:02.073863    8423 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (33.925125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-173000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (33.342125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
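The assertion compares only the Status field of the profile JSON dumped above. A quick way to inspect that field by hand, assuming jq is available (jq is not part of this test suite):

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'

Against the JSON above this prints "ha-173000: Starting", which is why the test, expecting "Degraded", fails.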

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-173000 stop -v=7 --alsologtostderr: (2.006917708s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr: exit status 7 (72.759833ms)

                                                
                                                
-- stdout --
	ha-173000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:04.271047    8566 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:04.271286    8566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:04.271291    8566 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:04.271294    8566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:04.271485    8566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:04.271659    8566 out.go:352] Setting JSON to false
	I1216 03:30:04.271672    8566 mustload.go:65] Loading cluster: ha-173000
	I1216 03:30:04.271718    8566 notify.go:220] Checking for updates...
	I1216 03:30:04.271940    8566 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:30:04.271949    8566 status.go:174] checking status of ha-173000 ...
	I1216 03:30:04.272260    8566 status.go:371] ha-173000 host status = "Stopped" (err=<nil>)
	I1216 03:30:04.272265    8566 status.go:384] host is not running, skipping remaining checks
	I1216 03:30:04.272267    8566 status.go:176] ha-173000 status: &{Name:ha-173000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-173000 status -v=7 --alsologtostderr": ha-173000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (37.537625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.12s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.192897417s)

                                                
                                                
-- stdout --
	* [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	* Restarting existing qemu2 VM for "ha-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-173000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:04.343540    8570 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:04.343715    8570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:04.343719    8570 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:04.343721    8570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:04.343839    8570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:04.345015    8570 out.go:352] Setting JSON to false
	I1216 03:30:04.362838    8570 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5375,"bootTime":1734343229,"procs":572,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:30:04.362910    8570 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:30:04.366999    8570 out.go:177] * [ha-173000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:30:04.373848    8570 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:30:04.373918    8570 notify.go:220] Checking for updates...
	I1216 03:30:04.380705    8570 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:30:04.384707    8570 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:30:04.387800    8570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:30:04.390841    8570 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:30:04.393802    8570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:30:04.397071    8570 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:30:04.397378    8570 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:30:04.401713    8570 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:30:04.408784    8570 start.go:297] selected driver: qemu2
	I1216 03:30:04.408791    8570 start.go:901] validating driver "qemu2" against &{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:30:04.408844    8570 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:30:04.411349    8570 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:30:04.411374    8570 cni.go:84] Creating CNI manager for ""
	I1216 03:30:04.411394    8570 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 03:30:04.411451    8570 start.go:340] cluster config:
	{Name:ha-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-173000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:30:04.416088    8570 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:30:04.424719    8570 out.go:177] * Starting "ha-173000" primary control-plane node in "ha-173000" cluster
	I1216 03:30:04.427737    8570 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:30:04.427751    8570 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:30:04.427764    8570 cache.go:56] Caching tarball of preloaded images
	I1216 03:30:04.427814    8570 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:30:04.427819    8570 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:30:04.427868    8570 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/ha-173000/config.json ...
	I1216 03:30:04.428284    8570 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:30:04.428315    8570 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "ha-173000"
	I1216 03:30:04.428323    8570 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:30:04.428329    8570 fix.go:54] fixHost starting: 
	I1216 03:30:04.428452    8570 fix.go:112] recreateIfNeeded on ha-173000: state=Stopped err=<nil>
	W1216 03:30:04.428462    8570 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:30:04.436661    8570 out.go:177] * Restarting existing qemu2 VM for "ha-173000" ...
	I1216 03:30:04.440750    8570 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:30:04.440783    8570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:26:1f:80:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:30:04.442971    8570 main.go:141] libmachine: STDOUT: 
	I1216 03:30:04.442989    8570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:30:04.443019    8570 fix.go:56] duration metric: took 14.690375ms for fixHost
	I1216 03:30:04.443024    8570 start.go:83] releasing machines lock for "ha-173000", held for 14.704875ms
	W1216 03:30:04.443029    8570 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:30:04.443061    8570 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:30:04.443066    8570 start.go:729] Will try again in 5 seconds ...
	I1216 03:30:09.445232    8570 start.go:360] acquireMachinesLock for ha-173000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:30:09.445672    8570 start.go:364] duration metric: took 319µs to acquireMachinesLock for "ha-173000"
	I1216 03:30:09.445816    8570 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:30:09.445833    8570 fix.go:54] fixHost starting: 
	I1216 03:30:09.446569    8570 fix.go:112] recreateIfNeeded on ha-173000: state=Stopped err=<nil>
	W1216 03:30:09.446590    8570 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:30:09.451949    8570 out.go:177] * Restarting existing qemu2 VM for "ha-173000" ...
	I1216 03:30:09.458962    8570 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:30:09.459203    8570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:26:1f:80:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/ha-173000/disk.qcow2
	I1216 03:30:09.468104    8570 main.go:141] libmachine: STDOUT: 
	I1216 03:30:09.468149    8570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:30:09.468226    8570 fix.go:56] duration metric: took 22.396375ms for fixHost
	I1216 03:30:09.468241    8570 start.go:83] releasing machines lock for "ha-173000", held for 22.548083ms
	W1216 03:30:09.468415    8570 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-173000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:30:09.476764    8570 out.go:201] 
	W1216 03:30:09.480944    8570 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:30:09.480960    8570 out.go:270] * 
	* 
	W1216 03:30:09.482943    8570 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:30:09.491989    8570 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (71.669917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
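Since the restart path cannot get past the socket_vmnet connection, the recovery the output itself suggests is to delete and recreate the profile once the network daemon is healthy again; a sketch under that assumption, reusing the exact start arguments from this test:

	out/minikube-darwin-arm64 delete -p ha-173000
	out/minikube-darwin-arm64 start -p ha-173000 --wait=true -v=7 --alsologtostderr --driver=qemu2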

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-173000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.624417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-173000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-173000 --control-plane -v=7 --alsologtostderr: exit status 83 (46.227666ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:09.700627    8599 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:09.700824    8599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:09.700828    8599 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:09.700830    8599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:09.700952    8599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:09.701194    8599 mustload.go:65] Loading cluster: ha-173000
	I1216 03:30:09.701400    8599 config.go:182] Loaded profile config "ha-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:30:09.705900    8599 out.go:177] * The control-plane node ha-173000 host is not running: state=Stopped
	I1216 03:30:09.708808    8599 out.go:177]   To start a cluster, run: "minikube start -p ha-173000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-173000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (35.125167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-173000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-173000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-173000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-173000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-173000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-173000 -n ha-173000: exit status 7 (34.774458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-173000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                    
TestImageBuild/serial/Setup (10.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 : exit status 80 (9.943498333s)

                                                
                                                
-- stdout --
	* [image-786000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-786000" primary control-plane node in "image-786000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-786000 -n image-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-786000 -n image-786000: exit status 7 (73.9235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-786000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.02s)

                                                
                                    
TestJSONOutput/start/Command (9.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-736000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-736000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.837389125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"966d85e8-d7ab-4f4b-95ce-a95089f09da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cdc1a300-5ce6-44e5-a549-9182ee707861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"9a908abc-3cf5-49be-be46-b2b0befc0e10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig"}}
	{"specversion":"1.0","id":"0adc3cd2-1ea4-4a0d-a1ab-e1ff86f34af5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3cc71822-ed16-4982-9012-81a68c786db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a9bb0a39-122e-4698-b7f7-7d33de8ddcc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube"}}
	{"specversion":"1.0","id":"c16b02e1-1320-49f9-b347-46eea9d4bdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3cb0ffd-5d54-4165-84f1-6e2b4aa39de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dba18b9-440b-4da3-b5ee-ddaaec904bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0b24ec6e-e11a-4cd2-b790-5840ca833ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-736000\" primary control-plane node in \"json-output-736000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"10b84cc2-36d0-426c-b91c-84ef824c2df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a4a3c8ea-3220-4883-8f20-31a3810f491b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-736000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cf2f7f3-25fd-4446-b203-b25787412980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"eb5fbe75-d4c2-44ad-8ee9-3605547ab3cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ffdd27e1-c1e1-47b9-951b-fe432bc0fb08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-736000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a51560b9-9b30-4855-aec0-d8494c203707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"72b0e386-4390-4056-9dfc-288195a38f63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-736000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
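
This failure has two layers: start exits 80 because of the socket_vmnet refusal, and the JSON test then chokes on the raw "OUTPUT: " line that the qemu wrapper printed between the CloudEvents. The decode step works roughly like the sketch below (helper names are illustrative, not the actual json_output_test.go code); the same per-line check explains the invalid character '*' failure in the unpause test further down.

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	// toCloudEvents mimics the per-line decode: every stdout line must be a
	// standalone JSON object, so a bare "OUTPUT: " line fails with
	// "invalid character 'O' looking for beginning of value".
	func toCloudEvents(stdout string) error {
		for _, line := range strings.Split(strings.TrimSpace(stdout), "\n") {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				return fmt.Errorf("converting to cloud events: %v", err)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(toCloudEvents("OUTPUT: "))
	}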

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-736000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-736000 --output=json --user=testUser: exit status 83 (84.295ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5c8ee162-7ee4-4761-b83e-125acab70d75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-736000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"08e94fc7-c629-4d05-a07a-ceaa829f6717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-736000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-736000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-736000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-736000 --output=json --user=testUser: exit status 83 (50.420417ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-736000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-736000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-736000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-671000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-671000 --driver=qemu2 : exit status 80 (9.897339625s)

                                                
                                                
-- stdout --
	* [first-671000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-671000" primary control-plane node in "first-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-671000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-16 03:30:43.033446 -0800 PST m=+432.805714710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-673000 -n second-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-673000 -n second-673000: exit status 85 (85.052292ms)

                                                
                                                
-- stdout --
	* Profile "second-673000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-673000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-673000" host is not running, skipping log retrieval (state="* Profile \"second-673000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-673000\"")
helpers_test.go:175: Cleaning up "second-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-673000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-16 03:30:43.235669 -0800 PST m=+433.007940668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-671000 -n first-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-671000 -n first-671000: exit status 7 (35.426209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-671000
--- FAIL: TestMinikubeProfile (10.22s)
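
The post-mortem here surfaces three distinct minikube exit codes in a single test: 80 from the failed start, 85 for the never-created "second-673000" profile, and 7 for the stopped "first-671000" host. A sketch of how a harness can branch on them (the code/meaning pairs are read off this report, not taken from minikube's documentation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// hypothetical post-mortem status check, mirroring helpers_test.go:239
		err := exec.Command("out/minikube-darwin-arm64", "status", "-p", "first-671000").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			switch ee.ExitCode() {
			case 7:
				fmt.Println("host stopped; status error may be ok")
			case 80:
				fmt.Println("GUEST_PROVISION: VM could not be created")
			case 83:
				fmt.Println("control-plane host not running")
			case 85:
				fmt.Println("profile not found")
			}
		}
	}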

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-782000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-782000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.914625166s)

                                                
                                                
-- stdout --
	* [mount-start-1-782000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-782000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-782000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-782000 -n mount-start-1-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-782000 -n mount-start-1-782000: exit status 7 (74.633625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.99s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-791000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-791000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.938380084s)

                                                
                                                
-- stdout --
	* [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-791000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:30:53.579827    8756 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:30:53.579976    8756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:53.579980    8756 out.go:358] Setting ErrFile to fd 2...
	I1216 03:30:53.579982    8756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:30:53.580115    8756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:30:53.581140    8756 out.go:352] Setting JSON to false
	I1216 03:30:53.598896    8756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5424,"bootTime":1734343229,"procs":577,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:30:53.598983    8756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:30:53.606769    8756 out.go:177] * [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:30:53.614718    8756 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:30:53.614826    8756 notify.go:220] Checking for updates...
	I1216 03:30:53.622608    8756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:30:53.625723    8756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:30:53.628738    8756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:30:53.630293    8756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:30:53.633745    8756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:30:53.636854    8756 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:30:53.641538    8756 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:30:53.649442    8756 start.go:297] selected driver: qemu2
	I1216 03:30:53.649450    8756 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:30:53.649458    8756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:30:53.652046    8756 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:30:53.656604    8756 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:30:53.659820    8756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:30:53.659837    8756 cni.go:84] Creating CNI manager for ""
	I1216 03:30:53.659858    8756 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1216 03:30:53.659863    8756 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:30:53.659895    8756 start.go:340] cluster config:
	{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:30:53.664876    8756 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:30:53.672631    8756 out.go:177] * Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	I1216 03:30:53.676740    8756 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:30:53.676758    8756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:30:53.676767    8756 cache.go:56] Caching tarball of preloaded images
	I1216 03:30:53.676859    8756 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:30:53.676865    8756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:30:53.677078    8756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/multinode-791000/config.json ...
	I1216 03:30:53.677090    8756 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/multinode-791000/config.json: {Name:mk4b94f101b0519d6e392913059df6a2e6907531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:30:53.677547    8756 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:30:53.677599    8756 start.go:364] duration metric: took 46.167µs to acquireMachinesLock for "multinode-791000"
	I1216 03:30:53.677611    8756 start.go:93] Provisioning new machine with config: &{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:30:53.677638    8756 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:30:53.686688    8756 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:30:53.704848    8756 start.go:159] libmachine.API.Create for "multinode-791000" (driver="qemu2")
	I1216 03:30:53.704875    8756 client.go:168] LocalClient.Create starting
	I1216 03:30:53.704957    8756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:30:53.705001    8756 main.go:141] libmachine: Decoding PEM data...
	I1216 03:30:53.705010    8756 main.go:141] libmachine: Parsing certificate...
	I1216 03:30:53.705051    8756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:30:53.705087    8756 main.go:141] libmachine: Decoding PEM data...
	I1216 03:30:53.705096    8756 main.go:141] libmachine: Parsing certificate...
	I1216 03:30:53.705600    8756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:30:53.869296    8756 main.go:141] libmachine: Creating SSH key...
	I1216 03:30:53.917685    8756 main.go:141] libmachine: Creating Disk image...
	I1216 03:30:53.917690    8756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:30:53.917911    8756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:30:53.927638    8756 main.go:141] libmachine: STDOUT: 
	I1216 03:30:53.927655    8756 main.go:141] libmachine: STDERR: 
	I1216 03:30:53.927714    8756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2 +20000M
	I1216 03:30:53.936217    8756 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:30:53.936235    8756 main.go:141] libmachine: STDERR: 
	I1216 03:30:53.936253    8756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:30:53.936258    8756 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:30:53.936271    8756 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:30:53.936304    8756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:9c:fe:b7:d0:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:30:53.938157    8756 main.go:141] libmachine: STDOUT: 
	I1216 03:30:53.938171    8756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:30:53.938189    8756 client.go:171] duration metric: took 233.310625ms to LocalClient.Create
	I1216 03:30:55.940373    8756 start.go:128] duration metric: took 2.262731166s to createHost
	I1216 03:30:55.940532    8756 start.go:83] releasing machines lock for "multinode-791000", held for 2.262864541s
	W1216 03:30:55.940650    8756 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:30:55.950023    8756 out.go:177] * Deleting "multinode-791000" in qemu2 ...
	W1216 03:30:55.980241    8756 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:30:55.980268    8756 start.go:729] Will try again in 5 seconds ...
	I1216 03:31:00.982471    8756 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:31:00.983009    8756 start.go:364] duration metric: took 437.959µs to acquireMachinesLock for "multinode-791000"
	I1216 03:31:00.983158    8756 start.go:93] Provisioning new machine with config: &{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:31:00.983454    8756 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:31:01.002461    8756 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:31:01.050834    8756 start.go:159] libmachine.API.Create for "multinode-791000" (driver="qemu2")
	I1216 03:31:01.050893    8756 client.go:168] LocalClient.Create starting
	I1216 03:31:01.051042    8756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:31:01.051149    8756 main.go:141] libmachine: Decoding PEM data...
	I1216 03:31:01.051166    8756 main.go:141] libmachine: Parsing certificate...
	I1216 03:31:01.051229    8756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:31:01.051289    8756 main.go:141] libmachine: Decoding PEM data...
	I1216 03:31:01.051306    8756 main.go:141] libmachine: Parsing certificate...
	I1216 03:31:01.052318    8756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:31:01.226501    8756 main.go:141] libmachine: Creating SSH key...
	I1216 03:31:01.414362    8756 main.go:141] libmachine: Creating Disk image...
	I1216 03:31:01.414370    8756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:31:01.414628    8756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:31:01.425129    8756 main.go:141] libmachine: STDOUT: 
	I1216 03:31:01.425150    8756 main.go:141] libmachine: STDERR: 
	I1216 03:31:01.425209    8756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2 +20000M
	I1216 03:31:01.433615    8756 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:31:01.433630    8756 main.go:141] libmachine: STDERR: 
	I1216 03:31:01.433639    8756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:31:01.433643    8756 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:31:01.433654    8756 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:31:01.433693    8756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:31:01.435497    8756 main.go:141] libmachine: STDOUT: 
	I1216 03:31:01.435516    8756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:31:01.435529    8756 client.go:171] duration metric: took 384.635375ms to LocalClient.Create
	I1216 03:31:03.437684    8756 start.go:128] duration metric: took 2.4542115s to createHost
	I1216 03:31:03.437740    8756 start.go:83] releasing machines lock for "multinode-791000", held for 2.454735208s
	W1216 03:31:03.438150    8756 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:31:03.454816    8756 out.go:201] 
	W1216 03:31:03.459829    8756 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:31:03.459876    8756 out.go:270] * 
	* 
	W1216 03:31:03.462619    8756 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:31:03.474718    8756 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-791000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (73.111208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
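
The alsologtostderr trace above shows exactly where the refusal happens: disk-image creation succeeds (both qemu-img convert and resize return clean), and the failure occurs only when libmachine execs the socket_vmnet wrapper around qemu. That invocation has the shape sketched below (paths and flags taken from the "executing:" log line; most qemu arguments elided):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client opens /var/run/socket_vmnet and hands the
		// connected descriptor to qemu, which is why the netdev is declared
		// as "socket,id=net0,fd=3" in the full command line above.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3",
			// remaining flags as in the log line above
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Printf("%s: %v", out, err)
		}
	}

So qemu itself is never started: the wrapper exits with status 1 as soon as its connect to the socket is refused, which is the "exit status 1" wrapped into every creating-host error in this report.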

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (79.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (67.140125ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-791000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- rollout status deployment/busybox: exit status 1 (62.328708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.403167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:03.752804    7256 retry.go:31] will retry after 621.749205ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.459708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:04.486353    7256 retry.go:31] will retry after 1.719679851s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.290625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:06.317716    7256 retry.go:31] will retry after 3.078191644s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.117458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:09.508371    7256 retry.go:31] will retry after 3.447893435s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.698959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:13.068279    7256 retry.go:31] will retry after 5.207016344s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.443583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:18.386075    7256 retry.go:31] will retry after 8.741522858s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.540917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:27.236430    7256 retry.go:31] will retry after 9.003265239s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.987458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:36.350953    7256 retry.go:31] will retry after 21.447872641s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.706125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 03:31:57.910385    7256 retry.go:31] will retry after 24.480429714s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.72475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (65.06375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.io: exit status 1 (64.15325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.341875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.333458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (35.362584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (79.23s)
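
Every lookup in this test dies the same way: kubectl can find no server for the "multinode-791000" cluster, and minikube's retry helper (retry.go:31) re-runs the query after a randomized, growing wait until the test gives up. A minimal Go sketch of that retry shape, assuming illustrative backoff constants (the real helper's delays are whatever retry.go computes, not these):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // getPodIPs issues the same query as multinode_test.go:505 above.
    func getPodIPs() (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
            "-p", "multinode-791000", "--",
            "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
        return string(out), err
    }

    // retryUntil re-runs fn with a randomized, growing delay until the
    // deadline passes, mirroring the "will retry after ..." lines above.
    func retryUntil(deadline time.Duration, fn func() (string, error)) (string, error) {
        start := time.Now()
        base := 10 * time.Second // illustrative base delay, not minikube's value
        for {
            out, err := fn()
            if err == nil {
                return out, nil
            }
            if time.Since(start) > deadline {
                return "", fmt.Errorf("deadline exceeded: %w", err)
            }
            wait := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            base = base * 3 / 2 // grow the delay, as the log's waits do
        }
    }

    func main() {
        if ips, err := retryUntil(90*time.Second, getPodIPs); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("pod IPs:", ips)
        }
    }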

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.054ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (33.676375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-791000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-791000 -v 3 --alsologtostderr: exit status 83 (51.827958ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-791000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-791000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:22.915626    8869 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:22.915806    8869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:22.915810    8869 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:22.915812    8869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:22.915959    8869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:22.916167    8869 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:22.916402    8869 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:22.925638    8869 out.go:177] * The control-plane node multinode-791000 host is not running: state=Stopped
	I1216 03:32:22.929687    8869 out.go:177]   To start a cluster, run: "minikube start -p multinode-791000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-791000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (34.748792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.09s)
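
Exit status 83 is minikube's guidance exit: `node add` bails out because the control-plane host is stopped, which the post-mortem's `status --format={{.Host}}` then confirms with exit status 7. A small guard of the same shape, sketched from the commands in this log (the "Running" string is the counterpart of the "Stopped" printed above; treat the exact value as an assumption):

    package helpers

    import (
        "os/exec"
        "strings"
    )

    // hostRunning invokes the same status command as the post-mortem
    // above; a stopped or missing host exits non-zero (7 in this log),
    // so any error already means "not running".
    func hostRunning(profile string) bool {
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile).Output()
        return err == nil && strings.TrimSpace(string(out)) == "Running"
    }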

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-791000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-791000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.804208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-791000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-791000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-791000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (34.308625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
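
This test fails one step earlier than the others: it calls the host's own kubectl with --context, and the context was never written to the kubeconfig because the cluster never came up, so the jsonpath decode then sees empty input ("unexpected end of JSON input"). A sketch of checking for the context first via kubectl's config subcommand (a check the test itself does not perform):

    package helpers

    import (
        "os/exec"
        "strings"
    )

    // contextExists scans `kubectl config get-contexts -o name` for the
    // profile's context, which would surface the "context was not found"
    // condition above before any jsonpath query is attempted.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if c == name {
                return true, nil
            }
        }
        return false, nil
    }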

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-791000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-791000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-791000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-791000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (34.206958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
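
The assertion here is a JSON decode: the test parses `profile list --output json` and counts Config.Nodes, which contains one entry instead of the expected three because the worker nodes were never added. A sketch of that count against the exact shape quoted above, declaring only the fields needed (encoding/json ignores the rest of the config):

    package helpers

    import (
        "encoding/json"
        "fmt"
    )

    // profileList declares just enough of the `profile list --output json`
    // payload shown in the failure message above.
    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        }
    }

    // nodeCount returns len(Config.Nodes) for the named profile; for the
    // JSON quoted above it returns 1 where the test wanted 3.
    func nodeCount(raw []byte, profile string) (int, error) {
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            return 0, err
        }
        for _, p := range pl.Valid {
            if p.Name == profile {
                return len(p.Config.Nodes), nil
            }
        }
        return 0, fmt.Errorf("profile %q not in valid list", profile)
    }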

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status --output json --alsologtostderr: exit status 7 (34.602541ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-791000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:23.152976    8881 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:23.153164    8881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.153168    8881 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:23.153171    8881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.153300    8881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:23.153428    8881 out.go:352] Setting JSON to true
	I1216 03:32:23.153437    8881 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:23.153498    8881 notify.go:220] Checking for updates...
	I1216 03:32:23.153651    8881 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:23.153658    8881 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:23.153910    8881 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:23.153914    8881 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:23.153916    8881 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-791000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (35.89625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
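
The decode error is a shape mismatch rather than bad JSON: with a single node, `status --output json` prints one object (the stdout above) while the test unmarshals into []cluster.Status. A tolerant decoder that would accept either form, with the struct fields copied from the stdout above (the real cluster.Status has more fields):

    package helpers

    import "encoding/json"

    // nodeStatus mirrors the object printed in the stdout above.
    type nodeStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    // decodeStatuses accepts either a JSON array (multinode output) or
    // the bare object a one-node profile prints, sidestepping the
    // "cannot unmarshal object into ... []cluster.Status" error above.
    func decodeStatuses(raw []byte) ([]nodeStatus, error) {
        var many []nodeStatus
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil
        }
        var one nodeStatus
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []nodeStatus{one}, nil
    }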

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 node stop m03: exit status 85 (51.955542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-791000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status: exit status 7 (34.90075ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr: exit status 7 (34.849083ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:23.311531    8889 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:23.311714    8889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.311717    8889 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:23.311719    8889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.311851    8889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:23.311967    8889 out.go:352] Setting JSON to false
	I1216 03:32:23.311977    8889 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:23.312043    8889 notify.go:220] Checking for updates...
	I1216 03:32:23.312175    8889 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:23.312182    8889 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:23.312439    8889 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:23.312443    8889 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:23.312445    8889 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr": multinode-791000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (35.062583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)
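
After the failed `node stop m03`, the test still asserts on the plain-text status output (multinode_test.go:267), counting nodes that report a running kubelet; with the whole profile stopped that count is zero. The check reduces to a line scan like this sketch (the "kubelet: Running"/"kubelet: Stopped" wording follows the stdout above):

    package helpers

    import "strings"

    // runningKubelets counts "kubelet: Running" lines in the plain
    // status output; every node in this log prints "kubelet: Stopped",
    // so the count is 0 where the test expected running kubelets.
    func runningKubelets(statusOut string) int {
        n := 0
        for _, line := range strings.Split(statusOut, "\n") {
            if strings.TrimSpace(line) == "kubelet: Running" {
                n++
            }
        }
        return n
    }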

                                                
                                    
TestMultiNode/serial/StartAfterStop (49.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 node start m03 -v=7 --alsologtostderr: exit status 85 (52.723416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:23.382058    8893 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:23.382450    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.382453    8893 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:23.382456    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.382619    8893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:23.382844    8893 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:23.383036    8893 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:23.386739    8893 out.go:201] 
	W1216 03:32:23.390639    8893 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1216 03:32:23.390646    8893 out.go:270] * 
	* 
	W1216 03:32:23.392431    8893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:32:23.396657    8893 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1216 03:32:23.382058    8893 out.go:345] Setting OutFile to fd 1 ...
I1216 03:32:23.382450    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:32:23.382453    8893 out.go:358] Setting ErrFile to fd 2...
I1216 03:32:23.382456    8893 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 03:32:23.382619    8893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
I1216 03:32:23.382844    8893 mustload.go:65] Loading cluster: multinode-791000
I1216 03:32:23.383036    8893 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1216 03:32:23.386739    8893 out.go:201] 
W1216 03:32:23.390639    8893 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1216 03:32:23.390646    8893 out.go:270] * 
* 
W1216 03:32:23.392431    8893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 03:32:23.396657    8893 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-791000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (35.408542ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:23.435218    8895 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:23.435411    8895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.435414    8895 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:23.435416    8895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:23.435559    8895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:23.435694    8895 out.go:352] Setting JSON to false
	I1216 03:32:23.435707    8895 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:23.435774    8895 notify.go:220] Checking for updates...
	I1216 03:32:23.435927    8895 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:23.435934    8895 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:23.436191    8895 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:23.436194    8895 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:23.436196    8895 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:23.437114    7256 retry.go:31] will retry after 902.379758ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (80.892791ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:24.420488    8897 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:24.420701    8897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:24.420706    8897 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:24.420709    8897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:24.420872    8897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:24.421035    8897 out.go:352] Setting JSON to false
	I1216 03:32:24.421048    8897 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:24.421086    8897 notify.go:220] Checking for updates...
	I1216 03:32:24.421325    8897 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:24.421334    8897 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:24.421648    8897 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:24.421653    8897 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:24.421656    8897 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:24.422756    7256 retry.go:31] will retry after 1.543476419s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (78.866125ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:26.045235    8899 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:26.045459    8899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:26.045463    8899 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:26.045466    8899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:26.045616    8899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:26.045783    8899 out.go:352] Setting JSON to false
	I1216 03:32:26.045796    8899 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:26.045836    8899 notify.go:220] Checking for updates...
	I1216 03:32:26.046049    8899 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:26.046058    8899 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:26.046398    8899 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:26.046403    8899 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:26.046418    8899 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:26.047615    7256 retry.go:31] will retry after 1.678322934s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (78.954041ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:27.804966    8903 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:27.805213    8903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:27.805218    8903 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:27.805229    8903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:27.805398    8903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:27.805596    8903 out.go:352] Setting JSON to false
	I1216 03:32:27.805608    8903 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:27.805677    8903 notify.go:220] Checking for updates...
	I1216 03:32:27.805894    8903 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:27.805903    8903 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:27.806231    8903 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:27.806236    8903 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:27.806239    8903 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:27.807309    7256 retry.go:31] will retry after 2.488350457s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (80.102333ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:30.375954    8905 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:30.376186    8905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:30.376190    8905 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:30.376193    8905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:30.376355    8905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:30.376503    8905 out.go:352] Setting JSON to false
	I1216 03:32:30.376516    8905 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:30.376560    8905 notify.go:220] Checking for updates...
	I1216 03:32:30.376763    8905 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:30.376772    8905 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:30.377111    8905 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:30.377116    8905 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:30.377119    8905 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:30.378140    7256 retry.go:31] will retry after 5.717886724s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (80.053459ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:36.175190    8909 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:36.175722    8909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:36.175728    8909 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:36.175732    8909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:36.176036    8909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:36.176313    8909 out.go:352] Setting JSON to false
	I1216 03:32:36.176344    8909 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:36.176556    8909 notify.go:220] Checking for updates...
	I1216 03:32:36.176962    8909 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:36.176974    8909 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:36.177293    8909 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:36.177299    8909 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:36.177302    8909 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:36.178545    7256 retry.go:31] will retry after 10.952027988s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (79.429417ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:32:47.210041    8917 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:32:47.210298    8917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:47.210303    8917 out.go:358] Setting ErrFile to fd 2...
	I1216 03:32:47.210307    8917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:47.210481    8917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:32:47.210651    8917 out.go:352] Setting JSON to false
	I1216 03:32:47.210662    8917 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:32:47.210698    8917 notify.go:220] Checking for updates...
	I1216 03:32:47.210940    8917 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:32:47.210948    8917 status.go:174] checking status of multinode-791000 ...
	I1216 03:32:47.211277    8917 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:32:47.211282    8917 status.go:384] host is not running, skipping remaining checks
	I1216 03:32:47.211285    8917 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:32:47.212399    7256 retry.go:31] will retry after 13.123522233s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (78.462709ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:33:00.414308    8925 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:00.414545    8925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:00.414549    8925 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:00.414553    8925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:00.414750    8925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:00.414929    8925 out.go:352] Setting JSON to false
	I1216 03:33:00.414945    8925 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:33:00.414984    8925 notify.go:220] Checking for updates...
	I1216 03:33:00.415217    8925 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:00.415225    8925 status.go:174] checking status of multinode-791000 ...
	I1216 03:33:00.415544    8925 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:33:00.415549    8925 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:00.415552    8925 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 03:33:00.416596    7256 retry.go:31] will retry after 11.985823472s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr: exit status 7 (78.958375ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:33:12.481421    8931 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:12.481659    8931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:12.481663    8931 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:12.481667    8931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:12.481867    8931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:12.482024    8931 out.go:352] Setting JSON to false
	I1216 03:33:12.482037    8931 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:33:12.482077    8931 notify.go:220] Checking for updates...
	I1216 03:33:12.482311    8931 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:12.482319    8931 status.go:174] checking status of multinode-791000 ...
	I1216 03:33:12.482615    8931 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:33:12.482619    8931 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:12.482621    8931 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-791000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (37.033917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.17s)
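
GUEST_NODE_RETRIEVE (exit status 85) means the profile simply has no node named m03; it was never created because the multinode start failed. The `node list` command that the next test runs would have shown that up front. A sketch of listing node names, assuming `node list` prints one whitespace-separated name/address pair per line (an assumption about the output format, not verified here):

    package helpers

    import (
        "os/exec"
        "strings"
    )

    // nodeNames runs the same `node list` command as RestartKeepsNodes
    // below and keeps the first field of each line; for this profile it
    // would contain only multinode-791000, with no m03.
    func nodeNames(profile string) ([]string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "node", "list",
            "-p", profile).Output()
        if err != nil {
            return nil, err
        }
        var names []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if f := strings.Fields(line); len(f) > 0 {
                names = append(names, f[0])
            }
        }
        return names, nil
    }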

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-791000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-791000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-791000: (3.771912334s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.236828792s)

                                                
                                                
-- stdout --
	* [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	* Restarting existing qemu2 VM for "multinode-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:33:16.398305    8957 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:16.398506    8957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:16.398510    8957 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:16.398513    8957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:16.398679    8957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:16.400006    8957 out.go:352] Setting JSON to false
	I1216 03:33:16.420477    8957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5567,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:33:16.420555    8957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:33:16.426060    8957 out.go:177] * [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:33:16.433810    8957 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:33:16.433890    8957 notify.go:220] Checking for updates...
	I1216 03:33:16.441951    8957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:33:16.444896    8957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:33:16.447978    8957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:33:16.451010    8957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:33:16.453980    8957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:33:16.457290    8957 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:16.457335    8957 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:33:16.460953    8957 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:33:16.467931    8957 start.go:297] selected driver: qemu2
	I1216 03:33:16.467937    8957 start.go:901] validating driver "qemu2" against &{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:33:16.468007    8957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:33:16.470830    8957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:33:16.470905    8957 cni.go:84] Creating CNI manager for ""
	I1216 03:33:16.470931    8957 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 03:33:16.470974    8957 start.go:340] cluster config:
	{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:33:16.475697    8957 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:16.483920    8957 out.go:177] * Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	I1216 03:33:16.487980    8957 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:33:16.487996    8957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:33:16.488007    8957 cache.go:56] Caching tarball of preloaded images
	I1216 03:33:16.488084    8957 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:33:16.488096    8957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:33:16.488154    8957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/multinode-791000/config.json ...
	I1216 03:33:16.488608    8957 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:16.488657    8957 start.go:364] duration metric: took 42.541µs to acquireMachinesLock for "multinode-791000"
	I1216 03:33:16.488666    8957 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:33:16.488671    8957 fix.go:54] fixHost starting: 
	I1216 03:33:16.488796    8957 fix.go:112] recreateIfNeeded on multinode-791000: state=Stopped err=<nil>
	W1216 03:33:16.488804    8957 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:33:16.495947    8957 out.go:177] * Restarting existing qemu2 VM for "multinode-791000" ...
	I1216 03:33:16.499957    8957 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:16.500007    8957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:33:16.502371    8957 main.go:141] libmachine: STDOUT: 
	I1216 03:33:16.502393    8957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:16.502425    8957 fix.go:56] duration metric: took 13.753667ms for fixHost
	I1216 03:33:16.502431    8957 start.go:83] releasing machines lock for "multinode-791000", held for 13.769625ms
	W1216 03:33:16.502437    8957 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:33:16.502474    8957 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:16.502478    8957 start.go:729] Will try again in 5 seconds ...
	I1216 03:33:21.504648    8957 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:21.505227    8957 start.go:364] duration metric: took 451.625µs to acquireMachinesLock for "multinode-791000"
	I1216 03:33:21.505378    8957 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:33:21.505404    8957 fix.go:54] fixHost starting: 
	I1216 03:33:21.506170    8957 fix.go:112] recreateIfNeeded on multinode-791000: state=Stopped err=<nil>
	W1216 03:33:21.506202    8957 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:33:21.514664    8957 out.go:177] * Restarting existing qemu2 VM for "multinode-791000" ...
	I1216 03:33:21.517738    8957 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:21.517971    8957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:33:21.528404    8957 main.go:141] libmachine: STDOUT: 
	I1216 03:33:21.528461    8957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:21.528557    8957 fix.go:56] duration metric: took 23.159875ms for fixHost
	I1216 03:33:21.528582    8957 start.go:83] releasing machines lock for "multinode-791000", held for 23.333042ms
	W1216 03:33:21.528728    8957 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:21.537744    8957 out.go:201] 
	W1216 03:33:21.541738    8957 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:33:21.541763    8957 out.go:270] * 
	* 
	W1216 03:33:21.544382    8957 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:33:21.551721    8957 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-791000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-791000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (36.759792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.16s)
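Every restart attempt in this test dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"), so qemu never receives its network descriptor and the host stays Stopped. A minimal triage sketch for the build agent, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (the Homebrew service name is that assumption, not something taken from this log):

    # confirm the daemon's unix socket exists on the agent
    ls -l /var/run/socket_vmnet
    # (re)start the daemon; it must run as root to create vmnet interfaces
    sudo brew services restart socket_vmnet
    # then retry the start that failed above
    out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=qemu2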

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 node delete m03: exit status 83 (44.695166ms)

-- stdout --
	* The control-plane node multinode-791000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-791000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-791000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr: exit status 7 (35.295833ms)

-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1216 03:33:21.756079    8973 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:21.756263    8973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:21.756266    8973 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:21.756268    8973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:21.756394    8973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:21.756511    8973 out.go:352] Setting JSON to false
	I1216 03:33:21.756531    8973 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:33:21.756572    8973 notify.go:220] Checking for updates...
	I1216 03:33:21.756739    8973 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:21.756745    8973 status.go:174] checking status of multinode-791000 ...
	I1216 03:33:21.756996    8973 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:33:21.756999    8973 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:21.757001    8973 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (35.271917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)
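The post-mortem helper only queries the Host field, but the status struct logged at 03:33:21.757001 shows the other fields a --format Go template can address (Name, Host, Kubelet, APIServer, Kubeconfig). A one-line sketch that prints them together; the template text is illustrative, not taken from the suite:

    out/minikube-darwin-arm64 status -p multinode-791000 \
      --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'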

TestMultiNode/serial/StopMultiNode (3.5s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-791000 stop: (3.356602417s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status: exit status 7 (72.57575ms)

-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr: exit status 7 (36.935417ms)

-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1216 03:33:25.258131    8999 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:25.258312    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:25.258315    8999 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:25.258318    8999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:25.258440    8999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:25.258570    8999 out.go:352] Setting JSON to false
	I1216 03:33:25.258578    8999 mustload.go:65] Loading cluster: multinode-791000
	I1216 03:33:25.258647    8999 notify.go:220] Checking for updates...
	I1216 03:33:25.258773    8999 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:25.258780    8999 status.go:174] checking status of multinode-791000 ...
	I1216 03:33:25.259029    8999 status.go:371] multinode-791000 host status = "Stopped" (err=<nil>)
	I1216 03:33:25.259032    8999 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:25.259034    8999 status.go:176] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr": multinode-791000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-791000 status --alsologtostderr": multinode-791000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (34.5845ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.50s)
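Note that the stop itself succeeded (3.36s); what fails is the follow-up assertion, which counts "Stopped" host and kubelet lines in the status output and expects one per cluster node. Because the worker nodes were never created, the count comes up short. A rough by-hand equivalent of that check (an illustration, not the test's actual code):

    out/minikube-darwin-arm64 -p multinode-791000 status | grep -c 'host: Stopped'
    out/minikube-darwin-arm64 -p multinode-791000 status | grep -c 'kubelet: Stopped'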

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192543792s)

-- stdout --
	* [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	* Restarting existing qemu2 VM for "multinode-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:33:25.327627    9003 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:25.327799    9003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:25.327803    9003 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:25.327805    9003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:25.327945    9003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:25.329047    9003 out.go:352] Setting JSON to false
	I1216 03:33:25.346691    9003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5576,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:33:25.346777    9003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:33:25.350828    9003 out.go:177] * [multinode-791000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:33:25.358855    9003 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:33:25.358946    9003 notify.go:220] Checking for updates...
	I1216 03:33:25.366817    9003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:33:25.369785    9003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:33:25.373758    9003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:33:25.376837    9003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:33:25.379696    9003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:33:25.383133    9003 config.go:182] Loaded profile config "multinode-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:25.383437    9003 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:33:25.387778    9003 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:33:25.394804    9003 start.go:297] selected driver: qemu2
	I1216 03:33:25.394811    9003 start.go:901] validating driver "qemu2" against &{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:33:25.394905    9003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:33:25.397648    9003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:33:25.397670    9003 cni.go:84] Creating CNI manager for ""
	I1216 03:33:25.397693    9003 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 03:33:25.397740    9003 start.go:340] cluster config:
	{Name:multinode-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:33:25.402293    9003 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:25.410810    9003 out.go:177] * Starting "multinode-791000" primary control-plane node in "multinode-791000" cluster
	I1216 03:33:25.414780    9003 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:33:25.414798    9003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:33:25.414810    9003 cache.go:56] Caching tarball of preloaded images
	I1216 03:33:25.414879    9003 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:33:25.414884    9003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:33:25.414944    9003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/multinode-791000/config.json ...
	I1216 03:33:25.415395    9003 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:25.415426    9003 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "multinode-791000"
	I1216 03:33:25.415435    9003 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:33:25.415440    9003 fix.go:54] fixHost starting: 
	I1216 03:33:25.415563    9003 fix.go:112] recreateIfNeeded on multinode-791000: state=Stopped err=<nil>
	W1216 03:33:25.415572    9003 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:33:25.423779    9003 out.go:177] * Restarting existing qemu2 VM for "multinode-791000" ...
	I1216 03:33:25.427793    9003 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:25.427832    9003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:33:25.430163    9003 main.go:141] libmachine: STDOUT: 
	I1216 03:33:25.430183    9003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:25.430215    9003 fix.go:56] duration metric: took 14.773334ms for fixHost
	I1216 03:33:25.430220    9003 start.go:83] releasing machines lock for "multinode-791000", held for 14.789084ms
	W1216 03:33:25.430225    9003 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:33:25.430269    9003 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:25.430274    9003 start.go:729] Will try again in 5 seconds ...
	I1216 03:33:30.432345    9003 start.go:360] acquireMachinesLock for multinode-791000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:30.432690    9003 start.go:364] duration metric: took 272.792µs to acquireMachinesLock for "multinode-791000"
	I1216 03:33:30.432801    9003 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:33:30.432823    9003 fix.go:54] fixHost starting: 
	I1216 03:33:30.433510    9003 fix.go:112] recreateIfNeeded on multinode-791000: state=Stopped err=<nil>
	W1216 03:33:30.433549    9003 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:33:30.437974    9003 out.go:177] * Restarting existing qemu2 VM for "multinode-791000" ...
	I1216 03:33:30.441962    9003 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:30.442150    9003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/multinode-791000/disk.qcow2
	I1216 03:33:30.451979    9003 main.go:141] libmachine: STDOUT: 
	I1216 03:33:30.452038    9003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:30.452113    9003 fix.go:56] duration metric: took 19.293959ms for fixHost
	I1216 03:33:30.452126    9003 start.go:83] releasing machines lock for "multinode-791000", held for 19.416ms
	W1216 03:33:30.452316    9003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:30.459922    9003 out.go:201] 
	W1216 03:33:30.463933    9003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:33:30.463952    9003 out.go:270] * 
	* 
	W1216 03:33:30.465773    9003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:33:30.473882    9003 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (74.045042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
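For reference, the command logged at 03:33:25.427832 shows how the qemu2 driver wires networking: it does not open /var/run/socket_vmnet itself but execs qemu through socket_vmnet_client, which connects to the daemon's socket and hands the connected descriptor to the child as fd 3, matching -netdev socket,id=net0,fd=3. Trimmed here to the networking-relevant flags (the elided arguments are the drive/cdrom/qmp/pidfile options shown in full above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -device virtio-net-pci,netdev=net0,mac=2a:83:a5:7d:66:52 \
      -netdev socket,id=net0,fd=3 ...

The "Connection refused" in STDERR is therefore emitted by socket_vmnet_client before qemu ever starts.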

TestMultiNode/serial/ValidateNameConflict (20.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-791000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-791000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-791000-m01 --driver=qemu2 : exit status 80 (9.997403417s)

-- stdout --
	* [multinode-791000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-791000-m01" primary control-plane node in "multinode-791000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-791000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-791000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-791000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-791000-m02 --driver=qemu2 : exit status 80 (10.208991792s)

-- stdout --
	* [multinode-791000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-791000-m02" primary control-plane node in "multinode-791000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-791000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-791000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-791000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-791000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-791000: exit status 83 (88.404417ms)

-- stdout --
	* The control-plane node multinode-791000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-791000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-791000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-791000 -n multinode-791000: exit status 7 (35.1725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.45s)
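Both throwaway profiles (-m01 and -m02) fail at first-time VM creation with the same "Connection refused", so the name-conflict path is never actually exercised; this is the host-level socket_vmnet failure again, not a clash between profiles. Cleanup per the log's own suggestion, applied to the leftovers (the --all form, which wipes every profile under this MINIKUBE_HOME, is an alternative):

    out/minikube-darwin-arm64 delete -p multinode-791000-m01
    out/minikube-darwin-arm64 delete -p multinode-791000-m02
    # or: out/minikube-darwin-arm64 delete --all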

TestPreload (10.24s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-103000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-103000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.077259333s)

-- stdout --
	* [test-preload-103000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-103000" primary control-plane node in "test-preload-103000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-103000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:33:51.167709    9059 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:33:51.167869    9059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:51.167873    9059 out.go:358] Setting ErrFile to fd 2...
	I1216 03:33:51.167875    9059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:51.168013    9059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:33:51.169214    9059 out.go:352] Setting JSON to false
	I1216 03:33:51.187057    9059 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5602,"bootTime":1734343229,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:33:51.187148    9059 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:33:51.193964    9059 out.go:177] * [test-preload-103000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:33:51.200981    9059 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:33:51.201034    9059 notify.go:220] Checking for updates...
	I1216 03:33:51.209873    9059 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:33:51.224883    9059 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:33:51.228940    9059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:33:51.230402    9059 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:33:51.233847    9059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:33:51.237306    9059 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:33:51.237351    9059 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:33:51.241781    9059 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:33:51.248861    9059 start.go:297] selected driver: qemu2
	I1216 03:33:51.248866    9059 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:33:51.248871    9059 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:33:51.251427    9059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:33:51.254936    9059 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:33:51.258891    9059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:33:51.258906    9059 cni.go:84] Creating CNI manager for ""
	I1216 03:33:51.258926    9059 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:33:51.258931    9059 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:33:51.258960    9059 start.go:340] cluster config:
	{Name:test-preload-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:33:51.263699    9059 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.271848    9059 out.go:177] * Starting "test-preload-103000" primary control-plane node in "test-preload-103000" cluster
	I1216 03:33:51.275917    9059 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1216 03:33:51.275990    9059 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/test-preload-103000/config.json ...
	I1216 03:33:51.276009    9059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/test-preload-103000/config.json: {Name:mk2d41a740b6402bde48aaaf1f4b4f4ef9e3f8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:33:51.276020    9059 cache.go:107] acquiring lock: {Name:mk43146b652c427a15299fa1b6267909fa626be4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276019    9059 cache.go:107] acquiring lock: {Name:mk1dff3b5db309adec1d9288b3368e36a05bdad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276029    9059 cache.go:107] acquiring lock: {Name:mkabc349fda91e59627d8ac1ea1d0fcb84aa1502 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276051    9059 cache.go:107] acquiring lock: {Name:mk1373b2e0475cd9b7b8d139cd59c619a6fb0d47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276065    9059 cache.go:107] acquiring lock: {Name:mk8111d1bbfc03b4d936ae8f0d2c0c6f917fae6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276105    9059 cache.go:107] acquiring lock: {Name:mkec16689740f73dced165480381cbc87a46bcc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276139    9059 cache.go:107] acquiring lock: {Name:mkeb8d7d2b68eb92be69dcd66562312298a167d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276171    9059 cache.go:107] acquiring lock: {Name:mk4f47f89203f0cb0e0e353e48ece2cf5aa41e32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:33:51.276379    9059 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 03:33:51.276401    9059 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 03:33:51.276445    9059 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 03:33:51.276565    9059 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 03:33:51.276587    9059 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 03:33:51.276652    9059 start.go:360] acquireMachinesLock for test-preload-103000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:51.276708    9059 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:33:51.276807    9059 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:33:51.276837    9059 start.go:364] duration metric: took 169.416µs to acquireMachinesLock for "test-preload-103000"
	I1216 03:33:51.276841    9059 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:33:51.276852    9059 start.go:93] Provisioning new machine with config: &{Name:test-preload-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:33:51.276937    9059 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:33:51.284957    9059 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:33:51.288858    9059 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:33:51.289027    9059 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 03:33:51.289408    9059 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 03:33:51.289474    9059 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:33:51.289485    9059 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 03:33:51.289557    9059 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 03:33:51.289571    9059 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:33:51.289725    9059 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 03:33:51.302004    9059 start.go:159] libmachine.API.Create for "test-preload-103000" (driver="qemu2")
	I1216 03:33:51.302026    9059 client.go:168] LocalClient.Create starting
	I1216 03:33:51.302113    9059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:33:51.302147    9059 main.go:141] libmachine: Decoding PEM data...
	I1216 03:33:51.302167    9059 main.go:141] libmachine: Parsing certificate...
	I1216 03:33:51.302201    9059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:33:51.302233    9059 main.go:141] libmachine: Decoding PEM data...
	I1216 03:33:51.302239    9059 main.go:141] libmachine: Parsing certificate...
	I1216 03:33:51.302606    9059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:33:51.473675    9059 main.go:141] libmachine: Creating SSH key...
	I1216 03:33:51.689406    9059 main.go:141] libmachine: Creating Disk image...
	I1216 03:33:51.689423    9059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:33:51.689652    9059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:51.699847    9059 main.go:141] libmachine: STDOUT: 
	I1216 03:33:51.699866    9059 main.go:141] libmachine: STDERR: 
	I1216 03:33:51.699916    9059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2 +20000M
	I1216 03:33:51.710209    9059 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:33:51.710223    9059 main.go:141] libmachine: STDERR: 
	I1216 03:33:51.710239    9059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:51.710242    9059 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:33:51.710256    9059 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:51.710288    9059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:fc:03:e0:0b:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:51.712563    9059 main.go:141] libmachine: STDOUT: 
	I1216 03:33:51.712580    9059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:51.712600    9059 client.go:171] duration metric: took 410.573875ms to LocalClient.Create
	I1216 03:33:51.734171    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1216 03:33:51.758393    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 03:33:51.789208    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1216 03:33:51.873038    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1216 03:33:51.992769    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1216 03:33:52.033568    9059 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 03:33:52.033599    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 03:33:52.106395    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 03:33:52.272220    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1216 03:33:52.272269    9059 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 996.225167ms
	I1216 03:33:52.272313    9059 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1216 03:33:52.444859    9059 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 03:33:52.444950    9059 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:33:52.979196    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:33:52.979244    9059 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.70324675s
	I1216 03:33:52.979278    9059 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:33:53.712867    9059 start.go:128] duration metric: took 2.435926792s to createHost
	I1216 03:33:53.712931    9059 start.go:83] releasing machines lock for "test-preload-103000", held for 2.436111458s
	W1216 03:33:53.713004    9059 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:53.730441    9059 out.go:177] * Deleting "test-preload-103000" in qemu2 ...
	W1216 03:33:53.763952    9059 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:33:53.763978    9059 start.go:729] Will try again in 5 seconds ...
	I1216 03:33:54.324329    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1216 03:33:54.324379    9059 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.048348333s
	I1216 03:33:54.324405    9059 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1216 03:33:56.205075    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1216 03:33:56.205140    9059 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.929189875s
	I1216 03:33:56.205173    9059 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1216 03:33:57.274053    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1216 03:33:57.274124    9059 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.99819125s
	I1216 03:33:57.274152    9059 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1216 03:33:57.375695    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1216 03:33:57.375758    9059 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.099766333s
	I1216 03:33:57.375783    9059 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1216 03:33:57.396017    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1216 03:33:57.396060    9059 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.120044791s
	I1216 03:33:57.396088    9059 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1216 03:33:58.764161    9059 start.go:360] acquireMachinesLock for test-preload-103000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:33:58.764669    9059 start.go:364] duration metric: took 427.125µs to acquireMachinesLock for "test-preload-103000"
	I1216 03:33:58.764780    9059 start.go:93] Provisioning new machine with config: &{Name:test-preload-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:33:58.765023    9059 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:33:58.772574    9059 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:33:58.820701    9059 start.go:159] libmachine.API.Create for "test-preload-103000" (driver="qemu2")
	I1216 03:33:58.820746    9059 client.go:168] LocalClient.Create starting
	I1216 03:33:58.820887    9059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:33:58.821038    9059 main.go:141] libmachine: Decoding PEM data...
	I1216 03:33:58.821077    9059 main.go:141] libmachine: Parsing certificate...
	I1216 03:33:58.821136    9059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:33:58.821194    9059 main.go:141] libmachine: Decoding PEM data...
	I1216 03:33:58.821210    9059 main.go:141] libmachine: Parsing certificate...
	I1216 03:33:58.821754    9059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:33:58.993023    9059 main.go:141] libmachine: Creating SSH key...
	I1216 03:33:59.144743    9059 main.go:141] libmachine: Creating Disk image...
	I1216 03:33:59.144755    9059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:33:59.144996    9059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:59.155429    9059 main.go:141] libmachine: STDOUT: 
	I1216 03:33:59.155463    9059 main.go:141] libmachine: STDERR: 
	I1216 03:33:59.155524    9059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2 +20000M
	I1216 03:33:59.164245    9059 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:33:59.164261    9059 main.go:141] libmachine: STDERR: 
	I1216 03:33:59.164276    9059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:59.164280    9059 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:33:59.164290    9059 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:33:59.164327    9059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:3b:62:97:e6:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/test-preload-103000/disk.qcow2
	I1216 03:33:59.166257    9059 main.go:141] libmachine: STDOUT: 
	I1216 03:33:59.166284    9059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:33:59.166300    9059 client.go:171] duration metric: took 345.552666ms to LocalClient.Create
	I1216 03:34:00.279988    9059 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1216 03:34:00.280044    9059 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.00404275s
	I1216 03:34:00.280071    9059 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1216 03:34:00.280124    9059 cache.go:87] Successfully saved all images to host disk.
	I1216 03:34:01.168493    9059 start.go:128] duration metric: took 2.403477333s to createHost
	I1216 03:34:01.168675    9059 start.go:83] releasing machines lock for "test-preload-103000", held for 2.404012958s
	W1216 03:34:01.168895    9059 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:01.181929    9059 out.go:201] 
	W1216 03:34:01.186888    9059 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:34:01.186915    9059 out.go:270] * 
	* 
	W1216 03:34:01.189455    9059 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:34:01.196858    9059 out.go:201] 

** /stderr **
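
The cache lines in the stderr above also document minikube's image-cache layout: a reference such as registry.k8s.io/pause:3.7 is written beneath $MINIKUBE_HOME/cache/images/<arch>/ with the tag separator ':' flattened to '_' (registry.k8s.io/pause_3.7). A minimal Go sketch of that mapping follows; it is a hypothetical helper read off the paths in the log, not minikube's own API:

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath mirrors the layout seen in the log: the reference keeps its
	// registry/repository directories and only the tag separator becomes '_'.
	// The images in this run carry no registry port, so a bare ReplaceAll on
	// ':' is safe here.
	func cachePath(minikubeHome, arch, ref string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch, strings.ReplaceAll(ref, ":", "_"))
	}

	func main() {
		home := "/Users/jenkins/minikube-integration/20107-6737/.minikube"
		fmt.Println(cachePath(home, "arm64", "registry.k8s.io/pause:3.7"))
		// prints .../.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	}
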
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-103000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-16 03:34:01.215691 -0800 PST m=+630.990561251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-103000 -n test-preload-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-103000 -n test-preload-103000: exit status 7 (71.176542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-103000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-103000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-103000
--- FAIL: TestPreload (10.24s)
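
Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it is launched with (-netdev socket,id=net0,fd=3 in the log above). The condition can be reproduced outside the test suite with a plain unix-socket dial; a minimal sketch using only the Go standard library, with the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client connects here and hands the resulting fd to
		// qemu-system-aarch64; if this dial fails, every VM creation will too.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints the same "connection refused"
			// reported for each test in this run.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
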

TestScheduledStopUnix (10.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-079000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-079000 --memory=2048 --driver=qemu2 : exit status 80 (9.915708541s)

-- stdout --
	* [scheduled-stop-079000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-079000" primary control-plane node in "scheduled-stop-079000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-079000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-079000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-079000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-079000" primary control-plane node in "scheduled-stop-079000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-079000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-079000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-16 03:34:11.290203 -0800 PST m=+641.065205376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-079000 -n scheduled-stop-079000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-079000 -n scheduled-stop-079000: exit status 7 (74.8325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-079000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-079000
--- FAIL: TestScheduledStopUnix (10.08s)

TestSkaffold (12.27s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3620714545 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3620714545 version: (1.029953292s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-912000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-912000 --memory=2600 --driver=qemu2 : exit status 80 (9.8790215s)

-- stdout --
	* [skaffold-912000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-912000" primary control-plane node in "skaffold-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-912000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-912000" primary control-plane node in "skaffold-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-16 03:34:23.567472 -0800 PST m=+653.342635335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-912000 -n skaffold-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-912000 -n skaffold-912000: exit status 7 (68.92175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-912000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-912000
--- FAIL: TestSkaffold (12.27s)

TestRunningBinaryUpgrade (629.09s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3632942445 start -p running-upgrade-993000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3632942445 start -p running-upgrade-993000 --memory=2200 --vm-driver=qemu2 : (59.557532291s)
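
The upgrade test drives two binaries against one profile: a released minikube (v1.26.0, fetched to a temporary path) creates the cluster, then the freshly built out/minikube-darwin-arm64 restarts the same profile in place, as the next Run line shows. A simplified sketch of that flow; the binary paths here are hypothetical stand-ins for the temp file above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one minikube invocation and echoes its combined output,
	// roughly what the (dbg) Run/Done lines in this report record.
	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("%s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		const profile = "running-upgrade-993000" // profile name from the log
		// Released binary creates the cluster (hypothetical path).
		if err := run("/tmp/minikube-v1.26.0", "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2"); err != nil {
			fmt.Println("initial start failed:", err)
			return
		}
		// The new binary reuses the existing profile and updates the running
		// VM; in this run it exited with status 80 after roughly nine minutes.
		if err := run("out/minikube-darwin-arm64", "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
			fmt.Println("upgrade start failed:", err)
		}
	}
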
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-993000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-993000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m55.230113417s)

-- stdout --
	* [running-upgrade-993000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-993000" primary control-plane node in "running-upgrade-993000" cluster
	* Updating the running qemu2 "running-upgrade-993000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1216 03:35:47.069559    9391 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:35:47.069730    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:35:47.069736    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:35:47.069739    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:35:47.069877    9391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:35:47.071005    9391 out.go:352] Setting JSON to false
	I1216 03:35:47.089548    9391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5718,"bootTime":1734343229,"procs":571,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:35:47.089627    9391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:35:47.094269    9391 out.go:177] * [running-upgrade-993000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:35:47.101257    9391 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:35:47.101299    9391 notify.go:220] Checking for updates...
	I1216 03:35:47.109272    9391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:35:47.113286    9391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:35:47.116319    9391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:35:47.119327    9391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:35:47.122287    9391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:35:47.125517    9391 config.go:182] Loaded profile config "running-upgrade-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:35:47.128381    9391 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1216 03:35:47.131316    9391 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:35:47.134338    9391 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:35:47.141220    9391 start.go:297] selected driver: qemu2
	I1216 03:35:47.141226    9391 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61111 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:35:47.141268    9391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:35:47.143638    9391 cni.go:84] Creating CNI manager for ""
	I1216 03:35:47.143669    9391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:35:47.143701    9391 start.go:340] cluster config:
	{Name:running-upgrade-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61111 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:35:47.143749    9391 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:35:47.152203    9391 out.go:177] * Starting "running-upgrade-993000" primary control-plane node in "running-upgrade-993000" cluster
	I1216 03:35:47.156251    9391 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 03:35:47.156274    9391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1216 03:35:47.156280    9391 cache.go:56] Caching tarball of preloaded images
	I1216 03:35:47.156353    9391 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:35:47.156359    9391 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1216 03:35:47.156413    9391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/config.json ...
	I1216 03:35:47.156729    9391 start.go:360] acquireMachinesLock for running-upgrade-993000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:36:00.236486    9391 start.go:364] duration metric: took 13.080047s to acquireMachinesLock for "running-upgrade-993000"
	I1216 03:36:00.236516    9391 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:36:00.236529    9391 fix.go:54] fixHost starting: 
	I1216 03:36:00.237417    9391 fix.go:112] recreateIfNeeded on running-upgrade-993000: state=Running err=<nil>
	W1216 03:36:00.237435    9391 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:36:00.244581    9391 out.go:177] * Updating the running qemu2 "running-upgrade-993000" VM ...
	I1216 03:36:00.248505    9391 machine.go:93] provisionDockerMachine start ...
	I1216 03:36:00.248597    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.248746    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.248751    9391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 03:36:00.307035    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-993000
	
	I1216 03:36:00.307047    9391 buildroot.go:166] provisioning hostname "running-upgrade-993000"
	I1216 03:36:00.307104    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.307220    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.307226    9391 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-993000 && echo "running-upgrade-993000" | sudo tee /etc/hostname
	I1216 03:36:00.368102    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-993000
	
	I1216 03:36:00.368179    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.368381    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.368391    9391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-993000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-993000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-993000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:36:00.423756    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:36:00.423773    9391 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20107-6737/.minikube CaCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20107-6737/.minikube}
	I1216 03:36:00.423785    9391 buildroot.go:174] setting up certificates
	I1216 03:36:00.423789    9391 provision.go:84] configureAuth start
	I1216 03:36:00.423801    9391 provision.go:143] copyHostCerts
	I1216 03:36:00.423874    9391 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem, removing ...
	I1216 03:36:00.423882    9391 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem
	I1216 03:36:00.424010    9391 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem (1082 bytes)
	I1216 03:36:00.424205    9391 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem, removing ...
	I1216 03:36:00.424210    9391 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem
	I1216 03:36:00.424251    9391 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem (1123 bytes)
	I1216 03:36:00.424928    9391 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem, removing ...
	I1216 03:36:00.424933    9391 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem
	I1216 03:36:00.424986    9391 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem (1679 bytes)
	I1216 03:36:00.425102    9391 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-993000 san=[127.0.0.1 localhost minikube running-upgrade-993000]
	I1216 03:36:00.632661    9391 provision.go:177] copyRemoteCerts
	I1216 03:36:00.632717    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:36:00.632728    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	I1216 03:36:00.664720    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:36:00.671455    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 03:36:00.678946    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 03:36:00.686422    9391 provision.go:87] duration metric: took 262.620084ms to configureAuth
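configureAuth regenerates the machine's server certificate so its SANs cover every name the VM can be reached by (127.0.0.1, localhost, minikube, and the profile name, per the san=[...] line above). A sketch of issuing such a SAN-bearing cert with the standard library, assuming an RSA CA key; minikube's real provisioner differs in detail:

    package provisionsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate signed by the given CA, with
    // the SANs seen in the log. Returns the DER bytes and the new key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-993000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "running-upgrade-993000"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }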
	I1216 03:36:00.686432    9391 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:36:00.686549    9391 config.go:182] Loaded profile config "running-upgrade-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:36:00.686607    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.686691    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.686696    9391 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 03:36:00.744181    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1216 03:36:00.744191    9391 buildroot.go:70] root file system type: tmpfs
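A tmpfs root identifies the guest as the Buildroot live image, which is why the provisioner rewrites systemd units on every start rather than trusting what a previous boot left behind. The probe is just the df pipeline above; run locally it could look like this (a sketch, not minikube's code):

    package fsprobe

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType reports the filesystem type of /, the same probe the log runs
    // over SSH ("df --output=fstype / | tail -n 1").
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(string(out)) // e.g. ["Type", "tmpfs"]
    	if len(fields) == 0 {
    		return "", fmt.Errorf("unexpected df output: %q", out)
    	}
    	return fields[len(fields)-1], nil
    }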
	I1216 03:36:00.744240    9391 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 03:36:00.744305    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.744416    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.744448    9391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 03:36:00.805394    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 03:36:00.805478    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.805606    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.805616    9391 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 03:36:00.862104    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
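Note the empty output: the unit file is staged as docker.service.new and only swapped into place (followed by daemon-reload, enable, restart) when diff reports a change, so an unchanged config never restarts Docker. A Go sketch of the same write-if-changed pattern (installIfChanged is hypothetical and omits the sudo indirection):

    package unitswap

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // installIfChanged mirrors the diff-||-mv pattern above: stage the candidate
    // unit and, only when it differs from the installed one, swap it in and
    // restart the service. Paths and unit name are taken from the log.
    func installIfChanged(unit []byte) error {
    	const cur, next = "/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"
    	old, _ := os.ReadFile(cur) // a missing file reads as empty, forcing an install
    	if bytes.Equal(old, unit) {
    		return nil // nothing changed; Docker keeps running undisturbed
    	}
    	if err := os.WriteFile(next, unit, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(next, cur); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }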
	I1216 03:36:00.862119    9391 machine.go:96] duration metric: took 613.62075ms to provisionDockerMachine
	I1216 03:36:00.862125    9391 start.go:293] postStartSetup for "running-upgrade-993000" (driver="qemu2")
	I1216 03:36:00.862132    9391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:36:00.862201    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:36:00.862217    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	I1216 03:36:00.890397    9391 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:36:00.891752    9391 info.go:137] Remote host: Buildroot 2021.02.12
	I1216 03:36:00.891758    9391 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20107-6737/.minikube/addons for local assets ...
	I1216 03:36:00.891822    9391 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20107-6737/.minikube/files for local assets ...
	I1216 03:36:00.891913    9391 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem -> 72562.pem in /etc/ssl/certs
	I1216 03:36:00.892010    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:36:00.894823    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem --> /etc/ssl/certs/72562.pem (1708 bytes)
	I1216 03:36:00.901472    9391 start.go:296] duration metric: took 39.342625ms for postStartSetup
	I1216 03:36:00.901486    9391 fix.go:56] duration metric: took 664.977834ms for fixHost
	I1216 03:36:00.901527    9391 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.901621    9391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10096b1b0] 0x10096d9f0 <nil>  [] 0s} localhost 61015 <nil> <nil>}
	I1216 03:36:00.901625    9391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:36:00.955792    9391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734348960.721732027
	
	I1216 03:36:00.955800    9391 fix.go:216] guest clock: 1734348960.721732027
	I1216 03:36:00.955803    9391 fix.go:229] Guest: 2024-12-16 03:36:00.721732027 -0800 PST Remote: 2024-12-16 03:36:00.901488 -0800 PST m=+13.856050084 (delta=-179.755973ms)
	I1216 03:36:00.955813    9391 fix.go:200] guest clock delta is within tolerance: -179.755973ms
	I1216 03:36:00.955816    9391 start.go:83] releasing machines lock for "running-upgrade-993000", held for 719.334583ms
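fixHost samples the guest clock with date +%s.%N and compares it with host time; the -179ms delta here is within tolerance, so no resync is needed. A sketch of the comparison (skew is a hypothetical helper; float parsing loses sub-microsecond precision, which is irrelevant at this tolerance):

    package clocksync

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // skew parses the guest's "date +%s.%N" output and returns guest minus
    // host; a caller would resync only when the magnitude exceeds a tolerance.
    func skew(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }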
	I1216 03:36:00.955893    9391 ssh_runner.go:195] Run: cat /version.json
	I1216 03:36:00.955905    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	I1216 03:36:00.955893    9391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:36:00.955951    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	W1216 03:36:00.956415    9391 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:61255->127.0.0.1:61015: write: connection reset by peer
	I1216 03:36:00.956433    9391 retry.go:31] will retry after 155.244078ms: ssh: handshake failed: write tcp 127.0.0.1:61255->127.0.0.1:61015: write: connection reset by peer
	W1216 03:36:00.982878    9391 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1216 03:36:00.982926    9391 ssh_runner.go:195] Run: systemctl --version
	I1216 03:36:00.984739    9391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:36:00.986443    9391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:36:00.986474    9391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 03:36:00.989527    9391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 03:36:00.993906    9391 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
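The two find/sed passes above force every bridge and podman CNI config onto the 10.244.0.0/16 pod CIDR and drop IPv6 entries; the log confirms 87-podman-bridge.conflist was rewritten. Done structurally rather than textually, the same rewrite might look like this (a sketch only; real conflists vary, and minikube edits them with sed as shown):

    package cnifix

    import (
    	"encoding/json"
    	"os"
    )

    // forceSubnet rewrites each plugin's ipam subnet in a CNI conflist to the
    // given pod CIDR, the structured equivalent of the sed rewrite in the log.
    func forceSubnet(path, cidr string) error {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var conf map[string]any
    	if err := json.Unmarshal(raw, &conf); err != nil {
    		return err
    	}
    	plugins, _ := conf["plugins"].([]any)
    	for _, p := range plugins {
    		if m, ok := p.(map[string]any); ok {
    			if ipam, ok := m["ipam"].(map[string]any); ok {
    				ipam["subnet"] = cidr // e.g. "10.244.0.0/16"
    			}
    		}
    	}
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, out, 0o644)
    }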
	I1216 03:36:00.993914    9391 start.go:495] detecting cgroup driver to use...
	I1216 03:36:00.993969    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:36:00.999597    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1216 03:36:01.002936    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 03:36:01.005819    9391 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 03:36:01.005847    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 03:36:01.008828    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:36:01.012234    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 03:36:01.015740    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:36:01.019116    9391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:36:01.022434    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 03:36:01.025931    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 03:36:01.029096    9391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 03:36:01.031949    9391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:36:01.035147    9391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:36:01.038486    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:01.150225    9391 ssh_runner.go:195] Run: sudo systemctl restart containerd
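Each sed in the series above keeps containerd's config.toml consistent with the docker-runtime decision: cgroupfs instead of SystemdCgroup, the runc v2 shim, the /etc/cni/net.d conf_dir, and unprivileged ports enabled. The SystemdCgroup flip, as one in-process example (a hypothetical helper equivalent to the sed command in the log):

    package containerdcfg

    import (
    	"os"
    	"regexp"
    )

    // setCgroupfs flips SystemdCgroup to false in config.toml while preserving
    // the line's original indentation, like the sed invocation above.
    func setCgroupfs(path string) error {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(raw, []byte("${1}SystemdCgroup = false"))
    	return os.WriteFile(path, out, 0o644)
    }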
	I1216 03:36:01.156254    9391 start.go:495] detecting cgroup driver to use...
	I1216 03:36:01.156346    9391 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 03:36:01.171648    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:36:01.214010    9391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:36:01.235946    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:36:01.240845    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 03:36:01.245385    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:36:01.251048    9391 ssh_runner.go:195] Run: which cri-dockerd
	I1216 03:36:01.252386    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 03:36:01.254918    9391 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 03:36:01.259876    9391 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 03:36:01.354170    9391 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 03:36:01.459139    9391 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 03:36:01.459210    9391 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 03:36:01.464392    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:01.570458    9391 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 03:36:17.818745    9391 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.248597875s)
	I1216 03:36:17.818814    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 03:36:17.823760    9391 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 03:36:17.831375    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 03:36:17.836805    9391 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 03:36:17.931950    9391 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 03:36:18.014991    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:18.096858    9391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 03:36:18.103069    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 03:36:18.108206    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:18.196384    9391 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 03:36:18.239117    9391 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 03:36:18.239208    9391 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 03:36:18.242522    9391 start.go:563] Will wait 60s for crictl version
	I1216 03:36:18.242580    9391 ssh_runner.go:195] Run: which crictl
	I1216 03:36:18.243871    9391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:36:18.256139    9391 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1216 03:36:18.256233    9391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 03:36:18.269269    9391 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 03:36:18.286236    9391 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1216 03:36:18.286320    9391 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1216 03:36:18.287725    9391 kubeadm.go:883] updating cluster {Name:running-upgrade-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61111 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1216 03:36:18.287767    9391 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 03:36:18.287814    9391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 03:36:18.298540    9391 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 03:36:18.298549    9391 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 03:36:18.298609    9391 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 03:36:18.302408    9391 ssh_runner.go:195] Run: which lz4
	I1216 03:36:18.304006    9391 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:36:18.305345    9391 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:36:18.305356    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1216 03:36:19.285201    9391 docker.go:653] duration metric: took 981.2715ms to copy over tarball
	I1216 03:36:19.285282    9391 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:36:20.684773    9391 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.399504667s)
	I1216 03:36:20.684786    9391 ssh_runner.go:146] rm: /preloaded.tar.lz4
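Because the runtime only held k8s.gcr.io-tagged images, the registry.k8s.io preload was copied in (about 359 MB over scp) and unpacked over /var with extended attributes preserved, then deleted. The extraction step, as a sketch:

    package preload

    import "os/exec"

    // extractPreload unpacks the preloaded image tarball over /var, preserving
    // security.capability xattrs, exactly as the log's tar invocation does.
    func extractPreload(tarball string) error {
    	return exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
    }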
	I1216 03:36:20.701040    9391 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 03:36:20.704381    9391 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1216 03:36:20.709443    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:20.796021    9391 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 03:36:22.001364    9391 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.20534875s)
	I1216 03:36:22.001472    9391 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 03:36:22.020523    9391 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 03:36:22.020536    9391 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 03:36:22.020541    9391 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 03:36:22.024929    9391 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:22.026753    9391 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:22.029462    9391 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:22.029564    9391 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:22.032240    9391 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:22.032267    9391 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:22.033386    9391 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:22.033842    9391 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:22.034896    9391 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:22.035245    9391 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:22.036198    9391 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 03:36:22.036541    9391 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:22.037308    9391 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:22.037572    9391 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:22.038403    9391 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 03:36:22.039526    9391 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:22.603935    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:22.615523    9391 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1216 03:36:22.615561    9391 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:22.615645    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:22.621639    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:22.627385    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1216 03:36:22.639593    9391 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1216 03:36:22.639613    9391 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:22.639680    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:22.642428    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:22.654489    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1216 03:36:22.660019    9391 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1216 03:36:22.660041    9391 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:22.660095    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:22.671448    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1216 03:36:22.706187    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:22.716283    9391 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1216 03:36:22.716314    9391 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:22.716388    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:22.726814    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1216 03:36:22.739597    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:22.749677    9391 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1216 03:36:22.749701    9391 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:22.749768    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:22.760515    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 03:36:22.760641    9391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1216 03:36:22.762439    9391 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1216 03:36:22.762458    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1216 03:36:22.839135    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 03:36:22.873623    9391 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1216 03:36:22.873646    9391 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1216 03:36:22.873706    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1216 03:36:22.905088    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 03:36:22.905216    9391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W1216 03:36:22.905730    9391 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 03:36:22.906080    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:22.919559    9391 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1216 03:36:22.919582    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1216 03:36:22.931316    9391 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1216 03:36:22.931339    9391 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:22.931399    9391 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:22.937261    9391 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 03:36:22.937271    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1216 03:36:22.966109    9391 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 03:36:22.966252    9391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 03:36:23.023132    9391 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1216 03:36:23.023146    9391 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1216 03:36:23.023161    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1216 03:36:23.086737    9391 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1216 03:36:23.086757    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W1216 03:36:23.166353    9391 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 03:36:23.166469    9391 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:23.269136    9391 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1216 03:36:23.269174    9391 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 03:36:23.269173    9391 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 03:36:23.269180    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1216 03:36:23.269190    9391 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:23.269251    9391 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:23.318217    9391 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 03:36:23.318271    9391 cache_images.go:92] duration metric: took 1.297747291s to LoadCachedImages
	W1216 03:36:23.318313    9391 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
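LoadCachedImages inspects each required image by ID, removes any whose hash does not match (several cached images are amd64 in this arm64 guest, per the arch-mismatch warnings), copies the cached tarball into the VM, and pipes it into docker load; kube-apiserver's cached file was missing on the host, hence the X warning above. The load step, sketched (loadFromFile is a hypothetical name):

    package imageload

    import (
    	"fmt"
    	"os/exec"
    )

    // loadFromFile streams a saved image tarball into the docker daemon, the
    // "sudo cat ... | docker load" step from the log.
    func loadFromFile(path string) error {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", path))
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker load %s: %v: %s", path, err, out)
    	}
    	return nil
    }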
	I1216 03:36:23.318319    9391 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1216 03:36:23.318377    9391 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-993000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:36:23.318447    9391 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 03:36:23.334268    9391 cni.go:84] Creating CNI manager for ""
	I1216 03:36:23.334284    9391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:36:23.334292    9391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 03:36:23.334306    9391 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-993000 NodeName:running-upgrade-993000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:36:23.334396    9391 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-993000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:36:23.334477    9391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1216 03:36:23.337555    9391 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 03:36:23.337593    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:36:23.340488    9391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1216 03:36:23.345677    9391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:36:23.350734    9391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
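The rendered kubeadm config above ties together everything detected earlier: the cri-dockerd socket, the cgroupfs kubelet driver, the 10.244.0.0/16 pod subnet, and the control-plane.minikube.internal:8443 endpoint; it is staged as kubeadm.yaml.new for the drift check below. A much-reduced sketch of filling such a config from a template (Values and render are illustrative; minikube's real template is far larger):

    package kubeadmcfg

    import (
    	"os"
    	"text/template"
    )

    // Values carries the handful of detected settings that vary per cluster in
    // the config above. Hypothetical struct, for illustration only.
    type Values struct {
    	CRISocket, CgroupDriver, PodSubnet, NodeName string
    }

    var tmpl = template.Must(template.New("kubeadm").Parse(`nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    cgroupDriver: {{.CgroupDriver}}
    podSubnet: "{{.PodSubnet}}"
    `))

    // render writes a fragment of the config to stdout; the real template is
    // filled the same way before being scp'd to kubeadm.yaml.new.
    func render(v Values) error { return tmpl.Execute(os.Stdout, v) }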
	I1216 03:36:23.356432    9391 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1216 03:36:23.357967    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:23.435337    9391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:36:23.440686    9391 certs.go:68] Setting up /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000 for IP: 10.0.2.15
	I1216 03:36:23.440693    9391 certs.go:194] generating shared ca certs ...
	I1216 03:36:23.440706    9391 certs.go:226] acquiring lock for ca certs: {Name:mk67ed11e928c780dd2836c87a10670f4077fd06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:23.440852    9391 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.key
	I1216 03:36:23.442715    9391 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.key
	I1216 03:36:23.442724    9391 certs.go:256] generating profile certs ...
	I1216 03:36:23.443278    9391 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/client.key
	I1216 03:36:23.443312    9391 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key.c8dd238a
	I1216 03:36:23.443330    9391 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt.c8dd238a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1216 03:36:23.498583    9391 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt.c8dd238a ...
	I1216 03:36:23.498589    9391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt.c8dd238a: {Name:mka406457e7dc99226d9dc0cfb0d80a6415d2b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:23.498994    9391 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key.c8dd238a ...
	I1216 03:36:23.498999    9391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key.c8dd238a: {Name:mkf074eed3e367f9f54ee511a6d8f304adbfaf7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:23.501728    9391 certs.go:381] copying /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt.c8dd238a -> /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt
	I1216 03:36:23.501976    9391 certs.go:385] copying /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key.c8dd238a -> /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key
	I1216 03:36:23.502257    9391 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/proxy-client.key
	I1216 03:36:23.502389    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256.pem (1338 bytes)
	W1216 03:36:23.502510    9391 certs.go:480] ignoring /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256_empty.pem, impossibly tiny 0 bytes
	I1216 03:36:23.502519    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:36:23.502643    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem (1082 bytes)
	I1216 03:36:23.502742    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:36:23.502795    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem (1679 bytes)
	I1216 03:36:23.502907    9391 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem (1708 bytes)
	I1216 03:36:23.503363    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:36:23.511084    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 03:36:23.518721    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:36:23.525557    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 03:36:23.532374    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 03:36:23.539344    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:36:23.547078    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:36:23.556998    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:36:23.564057    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem --> /usr/share/ca-certificates/72562.pem (1708 bytes)
	I1216 03:36:23.571026    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:36:23.578842    9391 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256.pem --> /usr/share/ca-certificates/7256.pem (1338 bytes)
	I1216 03:36:23.586793    9391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:36:23.592352    9391 ssh_runner.go:195] Run: openssl version
	I1216 03:36:23.594442    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72562.pem && ln -fs /usr/share/ca-certificates/72562.pem /etc/ssl/certs/72562.pem"
	I1216 03:36:23.598168    9391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72562.pem
	I1216 03:36:23.600358    9391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 11:24 /usr/share/ca-certificates/72562.pem
	I1216 03:36:23.600388    9391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72562.pem
	I1216 03:36:23.602417    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72562.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 03:36:23.605660    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 03:36:23.608795    9391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:23.610428    9391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:23.610454    9391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:23.612397    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 03:36:23.615557    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7256.pem && ln -fs /usr/share/ca-certificates/7256.pem /etc/ssl/certs/7256.pem"
	I1216 03:36:23.619145    9391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7256.pem
	I1216 03:36:23.620798    9391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 11:24 /usr/share/ca-certificates/7256.pem
	I1216 03:36:23.620831    9391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7256.pem
	I1216 03:36:23.622786    9391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7256.pem /etc/ssl/certs/51391683.0"
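Each CA lands in /usr/share/ca-certificates and is then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 3ec20f2e.0, 51391683.0 above), which is how OpenSSL's hashed-directory lookup finds it. A sketch that reproduces the hash-and-link pair (hashLink is hypothetical):

    package certlink

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hashLink returns the ln -fs command that publishes a CA under its
    // OpenSSL subject hash, matching the log's "openssl x509 -hash" + symlink
    // sequence.
    func hashLink(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	h := strings.TrimSpace(string(out))
    	return fmt.Sprintf("ln -fs %s /etc/ssl/certs/%s.0", pemPath, h), nil
    }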
	I1216 03:36:23.626428    9391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:36:23.628103    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:36:23.630025    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:36:23.632287    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:36:23.634416    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:36:23.636455    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:36:23.638207    9391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
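Before reusing the existing control plane, every serving certificate is checked with openssl x509 -checkend 86400, which fails if the cert expires within 24 hours and would force regeneration. The pure-Go equivalent, as a sketch:

    package certcheck

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // d, the Go equivalent of "openssl x509 -checkend 86400".
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, errors.New("no PEM block in " + path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }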
	I1216 03:36:23.640055    9391 kubeadm.go:392] StartCluster: {Name:running-upgrade-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61111 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:36:23.640129    9391 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 03:36:23.651058    9391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:36:23.655564    9391 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 03:36:23.655579    9391 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 03:36:23.655619    9391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:36:23.659262    9391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:23.659682    9391 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-993000" does not appear in /Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:36:23.659783    9391 kubeconfig.go:62] /Users/jenkins/minikube-integration/20107-6737/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-993000" cluster setting kubeconfig missing "running-upgrade-993000" context setting]
	I1216 03:36:23.660682    9391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/kubeconfig: {Name:mk517290cc56e622570f1566006f8aa91b83e6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:23.661401    9391 kapi.go:59] client config for running-upgrade-993000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/client.key", CAFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023d6f70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:36:23.661864    9391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:36:23.664847    9391 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-993000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
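
The diff above is what triggers the reconfiguration: the staged kubeadm.yaml.new switches criSocket to the unix:// URI form, flips cgroupDriver from systemd to cgroupfs, and adds hairpinMode and runtimeRequestTimeout. A minimal shell sketch of the drift check and repair the runner performs, assuming the paths from the log (diff exits non-zero when the staged file differs):

  if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
    # drift detected: promote the staged config before re-running the kubeadm phases
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  fi
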
	I1216 03:36:23.664852    9391 kubeadm.go:1160] stopping kube-system containers ...
	I1216 03:36:23.664903    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 03:36:23.676199    9391 docker.go:483] Stopping containers: [8308e60826c0 8ab45ec3ddc1 05707f8acd51 5343f97bf1c1 7c63e33e82f6 c54927f4539e 75574c64ad50 a971c09178a8 25b1601e414a 654d6e59cab6 6680fc1036a5 f7f4409fcd35 2e0b617c3402 1faebc5b571b dc2b79d03209 aa2932eee0f3 2e04829e1839 b87b58c2b4d0 536566eb3e68 fa9876fab5eb 64c2e7f695f9]
	I1216 03:36:23.676278    9391 ssh_runner.go:195] Run: docker stop 8308e60826c0 8ab45ec3ddc1 05707f8acd51 5343f97bf1c1 7c63e33e82f6 c54927f4539e 75574c64ad50 a971c09178a8 25b1601e414a 654d6e59cab6 6680fc1036a5 f7f4409fcd35 2e0b617c3402 1faebc5b571b dc2b79d03209 aa2932eee0f3 2e04829e1839 b87b58c2b4d0 536566eb3e68 fa9876fab5eb 64c2e7f695f9
	I1216 03:36:23.688925    9391 ssh_runner.go:195] Run: sudo systemctl stop kubelet
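
Before rewriting anything, the runner stops every kube-system container and then the kubelet, so nothing restarts the old control plane mid-repair. A condensed sketch of the two steps above:

  # stop all kube-system containers (docker name filters are regular expressions;
  # xargs -r skips the stop if the list is empty)
  docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
    | xargs -r docker stop
  # stop the kubelet so it does not recreate them
  sudo systemctl stop kubelet
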
	I1216 03:36:23.771864    9391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:36:23.775622    9391 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Dec 16 11:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec 16 11:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 16 11:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec 16 11:35 /etc/kubernetes/scheduler.conf
	
	I1216 03:36:23.775665    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/admin.conf
	I1216 03:36:23.779163    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:23.779197    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:36:23.783960    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/kubelet.conf
	I1216 03:36:23.787388    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:23.787433    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:36:23.790938    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/controller-manager.conf
	I1216 03:36:23.794154    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:23.794192    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:36:23.797348    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/scheduler.conf
	I1216 03:36:23.800038    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:23.800069    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
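
Each grep above probes whether an existing kubeconfig already points at the expected control-plane endpoint; exit status 1 means it does not, so the file is removed and regenerated below. An equivalent sketch, assuming the endpoint from the log:

  endpoint='https://control-plane.minikube.internal:61111'
  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"   # stale endpoint: regenerate
  done
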
	I1216 03:36:23.803173    9391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:36:23.806809    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:23.844935    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:24.208309    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:24.466197    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:24.489550    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
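
The restart path re-runs individual kubeadm init phases rather than a full kubeadm init, reusing the node's existing state where possible. A sketch of the sequence above, assuming the versioned binaries directory from the log:

  BIN=/var/lib/minikube/binaries/v1.24.1
  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
    # $phase is intentionally unquoted so 'certs all' splits into two arguments
    sudo env PATH="$BIN:$PATH" \
      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done
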
	I1216 03:36:24.513410    9391 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:36:24.513500    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:25.015862    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:25.513885    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:26.015410    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:26.515599    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:27.015608    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:27.514843    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:27.520748    9391 api_server.go:72] duration metric: took 3.0073945s to wait for apiserver process to appear ...
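
The process wait above polls pgrep roughly every 500ms until a kube-apiserver process appears (about 3s in this run). A one-line equivalent:

  # -x exact match, -n newest process, -f match against the full command line
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
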
	I1216 03:36:27.520764    9391 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:36:27.520790    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:32.522758    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:32.522785    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:37.522914    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:37.522939    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:42.523209    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:42.523238    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:47.524025    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:47.524088    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:52.524913    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:52.525025    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:57.526301    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:57.526376    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:02.526850    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:02.526959    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:07.528700    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:07.528749    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:12.530761    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:12.530848    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:17.533234    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:17.533278    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:22.535245    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:22.535326    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:27.537057    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
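
Every healthz probe in the loop above times out after about 5s with "context deadline exceeded", i.e. nothing answers on 10.0.2.15:8443. A curl equivalent of a single probe (a sketch only; -k skips TLS verification, which the real client does not):

  curl -sk --max-time 5 https://10.0.2.15:8443/healthz

After the probes keep failing, the runner switches to snapshotting component logs, as seen below.
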
	I1216 03:37:27.537475    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:27.571418    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:37:27.571582    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:27.592132    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:37:27.592236    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:27.609301    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:37:27.609388    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:27.621553    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:37:27.621636    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:27.632430    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:37:27.632516    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:27.643508    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:37:27.643589    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:27.657996    9391 logs.go:282] 0 containers: []
	W1216 03:37:27.658007    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:27.658081    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:27.668658    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:37:27.668674    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:37:27.668678    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:37:27.684082    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:37:27.684092    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:37:27.699674    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:37:27.699684    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:37:27.711574    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:37:27.711585    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:27.724129    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:37:27.724143    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:37:27.742814    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:37:27.742825    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:37:27.755272    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:27.755283    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:27.797119    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:37:27.797127    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:37:27.811138    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:37:27.811153    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:37:27.823400    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:37:27.823410    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:37:27.835854    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:37:27.835864    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:37:27.852680    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:37:27.852693    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:37:27.865117    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:27.865131    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:27.890603    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:37:27.890610    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:37:27.902291    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:27.902302    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:27.907305    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:27.907312    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:28.012240    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:37:28.012249    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:37:28.026289    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:37:28.026298    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:37:28.038629    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:37:28.038640    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
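
Between healthz attempts the runner gathers diagnostics: the last 400 lines of each control-plane container (current and exited, hence two IDs per component) plus the kubelet and docker journals. A condensed sketch of one gathering pass:

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
    for id in $(docker ps -a --filter=name="k8s_$name" --format '{{.ID}}'); do
      docker logs --tail 400 "$id"
    done
  done
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u docker -u cri-docker -n 400

The remaining cycles in this log repeat the same probe-then-gather pattern at later timestamps.
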
	I1216 03:37:30.552149    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:35.554495    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:35.554832    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:35.583102    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:37:35.583253    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:35.604323    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:37:35.604427    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:35.617465    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:37:35.617565    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:35.628537    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:37:35.628614    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:35.639039    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:37:35.639123    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:35.649775    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:37:35.649869    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:35.660599    9391 logs.go:282] 0 containers: []
	W1216 03:37:35.660609    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:35.660677    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:35.671178    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:37:35.671193    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:37:35.671197    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:37:35.682391    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:37:35.682402    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:37:35.693691    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:37:35.693702    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:37:35.705257    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:37:35.705270    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:37:35.720824    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:37:35.720836    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:37:35.733227    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:37:35.733237    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:37:35.751311    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:37:35.751321    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:37:35.766274    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:35.766283    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:35.807852    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:35.807860    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:35.812167    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:37:35.812176    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:37:35.826675    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:37:35.826691    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:37:35.838656    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:37:35.838671    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:37:35.852554    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:37:35.852564    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:35.864583    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:37:35.864596    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:37:35.876139    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:35.876149    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:35.901394    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:35.901401    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:35.938386    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:37:35.938394    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:37:35.952596    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:37:35.952609    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:37:35.965314    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:37:35.965322    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:37:38.479737    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:43.482474    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:43.482752    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:43.507372    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:37:43.507514    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:43.530713    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:37:43.530820    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:43.547136    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:37:43.547218    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:43.557309    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:37:43.557393    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:43.567728    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:37:43.567811    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:43.578596    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:37:43.578690    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:43.589095    9391 logs.go:282] 0 containers: []
	W1216 03:37:43.589106    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:43.589171    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:43.601990    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:37:43.602008    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:37:43.602019    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:37:43.613314    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:37:43.613327    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:43.625791    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:37:43.625801    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:37:43.641359    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:37:43.641375    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:37:43.657422    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:37:43.657442    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:37:43.671384    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:37:43.671399    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:37:43.684531    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:37:43.684546    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:37:43.696294    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:37:43.696305    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:37:43.707732    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:43.707743    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:43.733952    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:43.733966    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:43.773121    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:43.773129    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:43.812990    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:37:43.812999    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:37:43.827574    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:37:43.827589    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:37:43.840281    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:37:43.840291    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:37:43.857881    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:37:43.857894    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:37:43.869181    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:43.869191    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:43.873745    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:37:43.873752    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:37:43.885328    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:37:43.885338    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:37:43.901506    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:37:43.901521    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:37:46.415870    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:51.418042    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:51.418322    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:51.448513    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:37:51.448639    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:51.469804    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:37:51.469894    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:51.483779    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:37:51.483865    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:51.494442    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:37:51.494526    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:51.507153    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:37:51.507258    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:51.517633    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:37:51.517715    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:51.528374    9391 logs.go:282] 0 containers: []
	W1216 03:37:51.528426    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:51.528504    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:51.539460    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:37:51.539479    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:37:51.539484    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:37:51.553492    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:51.553502    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:51.594442    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:51.594451    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:51.598940    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:51.598946    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:51.640793    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:37:51.640804    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:37:51.653815    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:37:51.653825    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:51.668361    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:37:51.668373    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:37:51.679899    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:37:51.679912    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:37:51.691168    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:37:51.691179    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:37:51.709469    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:51.709481    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:51.735146    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:37:51.735154    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:37:51.746761    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:37:51.746774    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:37:51.758479    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:37:51.758493    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:37:51.772811    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:37:51.772824    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:37:51.784534    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:37:51.784543    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:37:51.802128    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:37:51.802141    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:37:51.817405    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:37:51.817416    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:37:51.831356    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:37:51.831369    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:37:51.847115    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:37:51.847125    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:37:54.360789    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:59.363390    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:59.363556    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:59.377271    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:37:59.377369    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:59.389301    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:37:59.389396    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:59.399910    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:37:59.399989    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:59.410370    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:37:59.410441    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:59.422357    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:37:59.422438    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:59.441716    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:37:59.441790    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:59.453330    9391 logs.go:282] 0 containers: []
	W1216 03:37:59.453342    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:59.453411    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:59.463693    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:37:59.463710    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:37:59.463716    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:59.477850    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:59.477862    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:59.482532    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:37:59.482538    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:37:59.493775    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:37:59.493786    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:37:59.512287    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:37:59.512299    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:37:59.533246    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:37:59.533256    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:37:59.544681    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:37:59.544693    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:37:59.556517    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:37:59.556531    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:37:59.571009    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:37:59.571021    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:37:59.586320    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:59.586331    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:59.623726    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:37:59.623735    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:37:59.636587    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:59.636606    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:59.663479    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:37:59.663497    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:37:59.680654    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:37:59.680668    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:37:59.692101    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:59.692112    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:59.734856    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:37:59.734871    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:37:59.749196    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:37:59.749206    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:37:59.760655    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:37:59.760668    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:37:59.776686    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:37:59.776697    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:02.290527    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:07.293179    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:07.293359    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:07.308289    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:07.308388    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:07.322332    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:07.322411    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:07.333721    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:07.333802    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:07.344404    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:07.344486    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:07.354973    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:07.355057    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:07.366565    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:07.366643    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:07.376798    9391 logs.go:282] 0 containers: []
	W1216 03:38:07.376810    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:07.376877    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:07.390116    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:07.390134    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:07.390139    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:07.403446    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:07.403458    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:07.414801    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:07.414813    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:07.441481    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:07.441490    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:07.453039    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:07.453050    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:07.457544    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:07.457553    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:07.494079    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:07.494093    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:07.509093    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:07.509103    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:07.521856    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:07.521868    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:07.536235    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:07.536245    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:07.547743    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:07.547755    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:07.559088    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:07.559100    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:07.572235    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:07.572245    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:07.615782    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:07.615790    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:07.629789    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:07.629801    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:07.641597    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:07.641608    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:07.653490    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:07.653503    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:07.675540    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:07.675551    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:07.687786    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:07.687797    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:10.205872    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:15.208103    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:15.208367    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:15.228648    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:15.228765    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:15.245659    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:15.245745    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:15.257544    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:15.257629    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:15.267998    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:15.268078    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:15.278941    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:15.279026    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:15.289321    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:15.289399    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:15.305770    9391 logs.go:282] 0 containers: []
	W1216 03:38:15.305784    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:15.305851    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:15.316464    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:15.316478    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:15.316484    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:15.328917    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:15.328930    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:15.340139    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:15.340150    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:15.351652    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:15.351662    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:15.363555    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:15.363565    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:15.390993    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:15.391001    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:15.427665    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:15.427679    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:15.441822    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:15.441832    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:15.453447    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:15.453461    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:15.465106    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:15.465117    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:15.477351    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:15.477365    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:15.481762    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:15.481770    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:15.494739    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:15.494748    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:15.513639    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:15.513650    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:15.525375    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:15.525386    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:15.541287    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:15.541297    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:15.559384    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:15.559395    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:15.599039    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:15.599051    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:15.613234    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:15.613246    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:18.129307    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:23.131668    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:23.131876    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:23.151182    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:23.151299    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:23.166437    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:23.166525    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:23.177934    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:23.178018    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:23.189224    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:23.189307    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:23.200107    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:23.200189    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:23.211207    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:23.211290    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:23.221691    9391 logs.go:282] 0 containers: []
	W1216 03:38:23.221701    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:23.221766    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:23.232176    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:23.232191    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:23.232196    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:23.273567    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:23.273578    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:23.278216    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:23.278224    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:23.294157    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:23.294168    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:23.320945    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:23.320966    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:23.358889    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:23.358902    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:23.382849    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:23.382860    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:23.394326    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:23.394340    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:23.406658    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:23.406674    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:23.444019    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:23.444032    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:23.458519    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:23.458529    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:23.472787    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:23.472796    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:23.488903    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:23.488913    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:23.501013    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:23.501022    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:23.512261    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:23.512271    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:23.524034    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:23.524049    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:23.536792    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:23.536801    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:23.550995    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:23.551004    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:23.562578    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:23.562590    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:26.075829    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:31.078068    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:31.078306    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:31.096494    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:31.096587    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:31.109751    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:31.109840    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:31.121701    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:31.121792    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:31.132125    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:31.132200    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:31.142569    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:31.142650    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:31.154233    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:31.154321    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:31.172857    9391 logs.go:282] 0 containers: []
	W1216 03:38:31.172868    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:31.172937    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:31.191068    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:31.191083    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:31.191088    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:31.212774    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:31.212789    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:31.225088    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:31.225100    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:31.237248    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:31.237262    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:31.254720    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:31.254728    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:31.265876    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:31.265888    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:31.282662    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:31.282673    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:31.294096    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:31.294109    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:31.305752    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:31.305768    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:31.317144    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:31.317155    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:31.341145    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:31.341152    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:31.355343    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:31.355357    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:31.369386    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:31.369395    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:31.404327    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:31.404336    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:31.425945    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:31.425955    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:31.437305    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:31.437315    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:31.449227    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:31.449238    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:31.461503    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:31.461514    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:31.501010    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:31.501018    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:34.007211    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:39.008294    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:39.008599    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:39.031213    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:39.031328    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:39.046729    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:39.046824    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:39.059921    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:39.060005    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:39.071125    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:39.071203    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:39.081951    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:39.082035    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:39.092646    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:39.092729    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:39.102599    9391 logs.go:282] 0 containers: []
	W1216 03:38:39.102610    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:39.102672    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:39.112892    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:39.112909    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:39.112914    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:39.126617    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:39.126630    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:39.138228    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:39.138242    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:39.149924    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:39.149936    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:39.162504    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:39.162517    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:39.175678    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:39.175690    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:39.188542    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:39.188553    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:39.205016    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:39.205027    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:39.217574    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:39.217586    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:39.235472    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:39.235483    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:39.247186    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:39.247198    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:39.260334    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:39.260347    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:39.271551    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:39.271562    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:39.283015    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:39.283026    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:39.321957    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:39.321965    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:39.326671    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:39.326677    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:39.361103    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:39.361114    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:39.375017    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:39.375030    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:39.391288    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:39.391298    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:41.917139    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:46.919756    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:46.920638    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:46.931933    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:46.932014    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:46.942486    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:46.942569    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:46.953185    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:46.953271    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:46.963733    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:46.963802    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:46.974866    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:46.974931    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:46.985801    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:46.985880    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:46.996050    9391 logs.go:282] 0 containers: []
	W1216 03:38:46.996060    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:46.996120    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:47.006457    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:47.006475    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:47.006481    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:47.011159    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:47.011165    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:47.025820    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:47.025832    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:47.042093    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:47.042104    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:47.053880    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:47.053890    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:47.070588    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:47.070599    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:47.085327    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:47.085341    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:47.097986    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:47.097998    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:47.110599    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:47.110610    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:47.122131    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:47.122140    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:47.133608    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:47.133619    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:47.150689    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:47.150699    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:47.162567    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:47.162576    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:47.187167    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:47.187181    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:47.201459    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:47.201470    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:47.213694    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:47.213707    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:47.253664    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:47.253674    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:47.289119    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:47.289132    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:47.302758    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:47.302767    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:49.816706    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:54.819039    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:54.819339    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:54.843924    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:38:54.844064    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:54.864016    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:38:54.864112    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:54.876468    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:38:54.876549    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:54.887479    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:38:54.887555    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:54.901331    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:38:54.901414    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:54.912038    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:38:54.912127    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:54.922704    9391 logs.go:282] 0 containers: []
	W1216 03:38:54.922716    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:54.922788    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:54.933186    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:38:54.933199    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:38:54.933203    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:38:54.944965    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:38:54.944974    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:38:54.957338    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:38:54.957351    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:38:54.974317    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:54.974326    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:55.015552    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:55.015562    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:55.020075    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:38:55.020083    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:38:55.031773    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:38:55.031787    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:38:55.046517    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:38:55.046531    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:38:55.058319    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:38:55.058334    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:38:55.076681    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:55.076689    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:55.114877    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:38:55.114890    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:38:55.127550    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:38:55.127560    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:38:55.141282    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:38:55.141294    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:38:55.154504    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:38:55.154514    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:38:55.166009    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:38:55.166017    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:38:55.177369    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:55.177379    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:55.201520    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:38:55.201528    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:55.215205    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:38:55.215215    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:38:55.229872    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:38:55.229883    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:38:57.747035    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:02.748865    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:02.749099    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:02.768279    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:02.768384    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:02.782806    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:02.782901    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:02.798919    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:02.799012    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:02.814395    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:02.814474    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:02.825671    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:02.825754    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:02.836459    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:02.836549    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:02.850304    9391 logs.go:282] 0 containers: []
	W1216 03:39:02.850314    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:02.850378    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:02.860589    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:02.860608    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:02.860612    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:02.873283    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:02.873293    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:02.884847    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:02.884860    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:02.896015    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:02.896026    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:02.912178    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:02.912190    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:02.924377    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:02.924388    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:02.964906    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:02.964923    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:02.999758    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:02.999768    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:03.013689    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:03.013700    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:03.028523    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:03.028533    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:03.040380    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:03.040390    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:03.057014    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:03.057024    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:03.068367    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:03.068377    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:03.073392    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:03.073398    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:03.085921    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:03.085934    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:03.098634    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:03.098644    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:03.109916    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:03.109928    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:03.121598    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:03.121614    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:03.145211    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:03.145220    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:05.660586    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:10.663323    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:10.663624    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:10.690486    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:10.690607    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:10.706571    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:10.706675    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:10.722083    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:10.722173    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:10.733225    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:10.733303    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:10.750172    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:10.750253    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:10.761085    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:10.761161    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:10.771274    9391 logs.go:282] 0 containers: []
	W1216 03:39:10.771288    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:10.771356    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:10.781902    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:10.781917    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:10.781922    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:10.805916    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:10.805931    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:10.810174    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:10.810181    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:10.824547    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:10.824562    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:10.840476    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:10.840489    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:10.852290    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:10.852304    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:10.863983    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:10.863995    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:10.898628    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:10.898641    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:10.911365    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:10.911381    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:10.932075    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:10.932084    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:10.947832    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:10.947843    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:10.960376    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:10.960387    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:10.974144    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:10.974159    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:10.985824    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:10.985839    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:10.998427    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:10.998441    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:11.041857    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:11.041867    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:11.053533    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:11.053544    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:11.073222    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:11.073232    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:11.084641    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:11.084652    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:13.602050    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:18.604254    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:18.604493    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:18.625479    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:18.625578    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:18.637373    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:18.637453    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:18.648225    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:18.648305    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:18.659354    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:18.659434    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:18.669546    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:18.669635    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:18.680122    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:18.680209    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:18.690695    9391 logs.go:282] 0 containers: []
	W1216 03:39:18.690708    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:18.690784    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:18.701670    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:18.701685    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:18.701690    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:18.713323    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:18.713336    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:18.726748    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:18.726760    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:18.738431    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:18.738442    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:18.753927    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:18.753937    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:18.764952    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:18.764965    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:18.776393    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:18.776404    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:18.788318    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:18.788333    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:18.792773    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:18.792782    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:18.828183    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:18.828196    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:18.845289    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:18.845300    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:18.862221    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:18.862232    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:18.903363    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:18.903371    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:18.915467    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:18.915478    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:18.927238    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:18.927247    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:18.944094    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:18.944104    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:18.956081    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:18.956091    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:18.979157    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:18.979167    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:18.998037    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:18.998049    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:21.511748    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:26.513912    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:26.514120    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:26.531402    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:26.531497    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:26.542255    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:26.542346    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:26.554462    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:26.554540    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:26.564979    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:26.565063    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:26.575617    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:26.575702    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:26.586400    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:26.586477    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:26.597095    9391 logs.go:282] 0 containers: []
	W1216 03:39:26.597108    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:26.597169    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:26.607765    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:26.607782    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:26.607787    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:26.620626    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:26.620635    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:26.637257    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:26.637269    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:26.654015    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:26.654025    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:26.665093    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:26.665103    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:26.677651    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:26.677661    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:26.689259    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:26.689271    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:26.730030    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:26.730043    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:26.734968    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:26.734976    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:26.749976    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:26.749992    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:26.765592    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:26.765602    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:26.776830    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:26.776840    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:26.788500    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:26.788512    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:26.804152    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:26.804162    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:26.818029    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:26.818041    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:26.829926    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:26.829940    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:26.841631    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:26.841643    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:26.864641    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:26.864649    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:26.898833    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:26.898847    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:29.413273    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:34.415434    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:34.415671    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:34.433785    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:34.433940    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:34.451834    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:34.451916    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:34.462972    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:34.463062    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:34.473779    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:34.473850    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:34.485950    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:34.486023    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:34.497214    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:34.497302    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:34.515508    9391 logs.go:282] 0 containers: []
	W1216 03:39:34.515521    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:34.515586    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:34.529678    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:34.529695    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:34.529700    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:34.542131    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:34.542147    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:34.555859    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:34.555870    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:34.569869    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:34.569882    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:34.586028    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:34.586039    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:34.628084    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:34.628094    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:34.632143    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:34.632152    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:34.667842    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:34.667855    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:34.683047    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:34.683059    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:34.700356    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:34.700366    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:34.712130    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:34.712142    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:34.726071    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:34.726082    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:34.743485    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:34.743500    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:34.767605    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:34.767615    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:34.779614    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:34.779627    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:34.791459    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:34.791471    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:34.818781    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:34.818792    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:34.833790    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:34.833801    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:34.845574    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:34.845586    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:37.359185    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:42.361495    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:42.362050    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:42.400232    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:42.400396    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:42.423276    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:42.423415    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:42.439299    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:42.439381    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:42.451719    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:42.451807    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:42.463120    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:42.463195    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:42.474607    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:42.474693    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:42.485161    9391 logs.go:282] 0 containers: []
	W1216 03:39:42.485174    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:42.485245    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:42.500720    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:42.500737    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:42.500742    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:42.512841    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:42.512852    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:42.525176    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:42.525186    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:42.538675    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:42.538687    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:42.576757    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:42.576770    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:42.595708    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:42.595717    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:42.610959    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:42.610971    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:42.622958    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:42.622973    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:42.639053    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:42.639066    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:42.651113    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:42.651122    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:42.674898    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:42.674909    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:42.716768    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:42.716776    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:42.729839    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:42.729848    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:42.741519    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:42.741530    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:42.746205    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:42.746212    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:42.761562    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:42.761571    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:42.783835    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:42.783846    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
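The container-status command uses a two-level shell fallback: the inner `which crictl || echo crictl` substitutes the literal word crictl when the binary is absent (so the sudo call fails cleanly rather than running an empty command), and the outer || then falls through to docker. Spelled out with explicit quoting:

    # prefer crictl; fall back to docker if crictl is missing or fails
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a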
	I1216 03:39:42.796040    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:42.796050    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:42.808993    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:42.809005    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:45.321890    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:50.324227    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:50.324813    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:50.364238    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:50.364398    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:50.385939    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:50.386074    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:50.401004    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:50.401093    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:50.413420    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:50.413505    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:50.424878    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:50.424959    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:50.435961    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:50.436052    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:50.446416    9391 logs.go:282] 0 containers: []
	W1216 03:39:50.446426    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:50.446494    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:50.457841    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:50.457858    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:50.457863    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:50.469249    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:50.469261    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:50.481885    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:50.481898    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:50.494537    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:50.494549    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:50.506833    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:50.506842    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:50.518919    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:50.518931    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:50.531064    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:50.531076    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:50.553596    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:50.553607    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:50.558672    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:50.558677    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:50.593271    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:50.593284    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:50.610324    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:50.610339    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:50.629732    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:50.629743    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:39:50.641961    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:50.641973    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:50.654677    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:50.654688    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:50.697418    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:50.697436    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:50.710812    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:50.710823    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:50.734176    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:50.734186    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:50.748808    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:50.748818    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:50.765379    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:50.765393    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:53.284382    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:58.286678    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:58.286921    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:58.307562    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:39:58.307666    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:58.321609    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:39:58.321704    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:58.333298    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:39:58.333384    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:58.343794    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:39:58.343879    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:58.354928    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:39:58.355004    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:58.366020    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:39:58.366094    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:58.376536    9391 logs.go:282] 0 containers: []
	W1216 03:39:58.376548    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:58.376616    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:58.386927    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:39:58.386942    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:39:58.386947    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:39:58.401521    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:39:58.401533    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:39:58.417368    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:39:58.417383    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:58.429456    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:39:58.429466    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:39:58.441245    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:39:58.441258    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:39:58.453162    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:58.453173    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:58.493215    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:39:58.493223    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:39:58.509901    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:39:58.509912    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:39:58.521787    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:39:58.521799    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:39:58.533217    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:39:58.533227    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:39:58.552518    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:58.552529    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:58.575335    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:39:58.575345    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:39:58.586938    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:39:58.586952    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:39:58.604019    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:39:58.604030    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:39:58.624086    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:58.624099    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:58.628445    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:58.628450    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:58.666765    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:39:58.666776    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:39:58.679878    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:39:58.679888    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:39:58.694736    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:39:58.694748    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:40:01.209332    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:06.211695    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:06.212215    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:40:06.247345    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:40:06.247497    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:40:06.269830    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:40:06.269935    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:40:06.283539    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:40:06.283619    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:40:06.294934    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:40:06.295020    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:40:06.306161    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:40:06.306251    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:40:06.317620    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:40:06.317703    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:40:06.328647    9391 logs.go:282] 0 containers: []
	W1216 03:40:06.328659    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:40:06.328729    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:40:06.339526    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:40:06.339549    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:40:06.339554    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:40:06.356717    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:40:06.356729    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:40:06.368883    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:40:06.368894    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:40:06.381109    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:40:06.381122    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:40:06.397963    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:40:06.397976    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:40:06.409817    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:40:06.409830    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:40:06.414712    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:40:06.414721    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:40:06.428355    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:40:06.428367    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:40:06.440496    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:40:06.440507    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:40:06.452500    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:40:06.452512    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:40:06.475007    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:40:06.475014    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:40:06.486932    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:40:06.486943    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:40:06.499077    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:40:06.499089    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:40:06.510751    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:40:06.510760    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:40:06.531441    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:40:06.531452    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:40:06.545829    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:40:06.545842    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:40:06.562775    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:40:06.562791    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:40:06.574959    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:40:06.574970    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:40:06.616437    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:40:06.616445    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:40:09.162269    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:14.164503    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:14.164663    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:40:14.176678    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:40:14.176759    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:40:14.187159    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:40:14.187251    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:40:14.200940    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:40:14.201017    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:40:14.211168    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:40:14.211254    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:40:14.223500    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:40:14.223591    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:40:14.235126    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:40:14.235209    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:40:14.245604    9391 logs.go:282] 0 containers: []
	W1216 03:40:14.245615    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:40:14.245685    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:40:14.256548    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:40:14.256569    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:40:14.256575    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:40:14.267546    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:40:14.267558    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:40:14.283713    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:40:14.283726    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:40:14.306504    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:40:14.306519    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:40:14.328443    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:40:14.328453    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:40:14.344096    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:40:14.344106    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:40:14.358536    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:40:14.358550    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:40:14.369679    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:40:14.369690    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:40:14.384239    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:40:14.384248    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:40:14.396927    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:40:14.396941    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:40:14.433019    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:40:14.433030    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:40:14.446863    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:40:14.446874    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:40:14.459558    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:40:14.459568    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:40:14.476327    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:40:14.476341    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:40:14.488267    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:40:14.488282    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:40:14.501035    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:40:14.501050    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:40:14.512382    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:40:14.512396    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:40:14.556425    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:40:14.556442    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:40:14.561685    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:40:14.561697    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:40:17.075886    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:22.078074    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:22.078199    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:40:22.092771    9391 logs.go:282] 2 containers: [9056c42a4aa0 7c63e33e82f6]
	I1216 03:40:22.092861    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:40:22.104698    9391 logs.go:282] 2 containers: [b67ab6624dc0 b87b58c2b4d0]
	I1216 03:40:22.104777    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:40:22.115023    9391 logs.go:282] 2 containers: [eab31fc651ca 2e0b617c3402]
	I1216 03:40:22.115110    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:40:22.125615    9391 logs.go:282] 2 containers: [e89a76c77d03 25b1601e414a]
	I1216 03:40:22.125696    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:40:22.136302    9391 logs.go:282] 2 containers: [c48e10c0de0e 654d6e59cab6]
	I1216 03:40:22.136385    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:40:22.147324    9391 logs.go:282] 2 containers: [a3eeb38fe662 75574c64ad50]
	I1216 03:40:22.147414    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:40:22.158040    9391 logs.go:282] 0 containers: []
	W1216 03:40:22.158051    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:40:22.158118    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:40:22.168481    9391 logs.go:282] 2 containers: [11e9f061b976 05707f8acd51]
	I1216 03:40:22.168496    9391 logs.go:123] Gathering logs for kube-scheduler [e89a76c77d03] ...
	I1216 03:40:22.168501    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e89a76c77d03"
	I1216 03:40:22.180168    9391 logs.go:123] Gathering logs for kube-proxy [654d6e59cab6] ...
	I1216 03:40:22.180179    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654d6e59cab6"
	I1216 03:40:22.192417    9391 logs.go:123] Gathering logs for etcd [b87b58c2b4d0] ...
	I1216 03:40:22.192428    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87b58c2b4d0"
	I1216 03:40:22.208897    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:40:22.208906    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:40:22.230146    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:40:22.230155    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:40:22.242780    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:40:22.242793    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:40:22.282405    9391 logs.go:123] Gathering logs for kube-apiserver [9056c42a4aa0] ...
	I1216 03:40:22.282419    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9056c42a4aa0"
	I1216 03:40:22.296773    9391 logs.go:123] Gathering logs for kube-apiserver [7c63e33e82f6] ...
	I1216 03:40:22.296787    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c63e33e82f6"
	I1216 03:40:22.313362    9391 logs.go:123] Gathering logs for etcd [b67ab6624dc0] ...
	I1216 03:40:22.313376    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67ab6624dc0"
	I1216 03:40:22.328239    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:40:22.328252    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:40:22.332578    9391 logs.go:123] Gathering logs for coredns [eab31fc651ca] ...
	I1216 03:40:22.332587    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab31fc651ca"
	I1216 03:40:22.345597    9391 logs.go:123] Gathering logs for kube-scheduler [25b1601e414a] ...
	I1216 03:40:22.345608    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25b1601e414a"
	I1216 03:40:22.361995    9391 logs.go:123] Gathering logs for kube-controller-manager [75574c64ad50] ...
	I1216 03:40:22.362006    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75574c64ad50"
	I1216 03:40:22.377914    9391 logs.go:123] Gathering logs for storage-provisioner [11e9f061b976] ...
	I1216 03:40:22.377928    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e9f061b976"
	I1216 03:40:22.397610    9391 logs.go:123] Gathering logs for storage-provisioner [05707f8acd51] ...
	I1216 03:40:22.397621    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05707f8acd51"
	I1216 03:40:22.409697    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:40:22.409708    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:40:22.444928    9391 logs.go:123] Gathering logs for coredns [2e0b617c3402] ...
	I1216 03:40:22.444942    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0b617c3402"
	I1216 03:40:22.457052    9391 logs.go:123] Gathering logs for kube-proxy [c48e10c0de0e] ...
	I1216 03:40:22.457063    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c48e10c0de0e"
	I1216 03:40:22.469022    9391 logs.go:123] Gathering logs for kube-controller-manager [a3eeb38fe662] ...
	I1216 03:40:22.469033    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3eeb38fe662"
	I1216 03:40:24.989461    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:29.991622    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:29.991662    9391 kubeadm.go:597] duration metric: took 4m6.340650292s to restartPrimaryControlPlane
	W1216 03:40:29.991695    9391 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:40:29.991708    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 03:40:31.095948    9391 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.104249208s)
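With the restart abandoned, minikube resets the node before re-initializing it. kubeadm reset --force non-interactively removes the static-pod manifests, the etcd data directory, and the /etc/kubernetes/*.conf kubeconfigs, and --cri-socket pins it to cri-dockerd instead of autodetecting a runtime; the "found existing configuration files" check below comes back empty for exactly this reason. Restated as a standalone command:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force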
	I1216 03:40:31.096017    9391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:40:31.101159    9391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:40:31.104071    9391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:40:31.107337    9391 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:40:31.107345    9391 kubeadm.go:157] found existing configuration files:
	
	I1216 03:40:31.107398    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/admin.conf
	I1216 03:40:31.110720    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:40:31.110778    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:40:31.114455    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/kubelet.conf
	I1216 03:40:31.117487    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:40:31.117542    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:40:31.120012    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/controller-manager.conf
	I1216 03:40:31.122884    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:40:31.122916    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:40:31.125608    9391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/scheduler.conf
	I1216 03:40:31.127944    9391 kubeadm.go:163] "https://control-plane.minikube.internal:61111" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61111 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:40:31.127974    9391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
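The four grep/rm pairs above are stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. Since the reset just deleted all four files, every grep exits with status 2 and the rm calls are no-ops. Condensed into a loop (a sketch, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:61111" \
            "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done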
	I1216 03:40:31.130643    9391 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
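The --ignore-preflight-errors list lets init proceed over leftovers from the aborted restart: each DirAvailable--*/FileAvailable--* entry names a preflight check whose path has its dots and slashes flattened to dashes (DirAvailable--etc-kubernetes-manifests is the /etc/kubernetes/manifests check), plus the Port-10250, Swap, NumCPU and Mem checks. A trimmed sketch of the same invocation:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem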
	I1216 03:40:31.148818    9391 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 03:40:31.149049    9391 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 03:40:31.196490    9391 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:40:31.196547    9391 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:40:31.196599    9391 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:40:31.243924    9391 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:40:31.247920    9391 out.go:235]   - Generating certificates and keys ...
	I1216 03:40:31.247955    9391 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 03:40:31.247992    9391 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 03:40:31.248055    9391 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:40:31.248104    9391 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:40:31.248203    9391 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:40:31.248287    9391 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 03:40:31.248322    9391 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:40:31.248357    9391 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:40:31.248397    9391 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:40:31.248452    9391 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:40:31.248497    9391 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 03:40:31.248543    9391 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:40:31.277262    9391 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:40:31.349715    9391 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:40:31.527499    9391 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:40:31.746759    9391 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:40:31.777110    9391 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:40:31.777447    9391 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:40:31.777526    9391 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 03:40:31.862394    9391 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:40:31.869579    9391 out.go:235]   - Booting up control plane ...
	I1216 03:40:31.869630    9391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:40:31.869670    9391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:40:31.869702    9391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:40:31.869748    9391 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:40:31.869881    9391 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 03:40:36.372437    9391 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502362 seconds
	I1216 03:40:36.372503    9391 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:40:36.375999    9391 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:40:36.886840    9391 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:40:36.887151    9391 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-993000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:40:37.393349    9391 kubeadm.go:310] [bootstrap-token] Using token: lol1lb.540mknab2j0zikys
	I1216 03:40:37.396646    9391 out.go:235]   - Configuring RBAC rules ...
	I1216 03:40:37.396712    9391 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:40:37.396761    9391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:40:37.398957    9391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:40:37.403518    9391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:40:37.404164    9391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:40:37.404997    9391 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:40:37.408597    9391 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:40:37.588517    9391 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 03:40:37.796916    9391 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 03:40:37.797324    9391 kubeadm.go:310] 
	I1216 03:40:37.797357    9391 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 03:40:37.797369    9391 kubeadm.go:310] 
	I1216 03:40:37.797418    9391 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 03:40:37.797424    9391 kubeadm.go:310] 
	I1216 03:40:37.797442    9391 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 03:40:37.797472    9391 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:40:37.797505    9391 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:40:37.797508    9391 kubeadm.go:310] 
	I1216 03:40:37.797539    9391 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 03:40:37.797542    9391 kubeadm.go:310] 
	I1216 03:40:37.797570    9391 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:40:37.797575    9391 kubeadm.go:310] 
	I1216 03:40:37.797602    9391 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 03:40:37.797651    9391 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:40:37.797694    9391 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:40:37.797697    9391 kubeadm.go:310] 
	I1216 03:40:37.797745    9391 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:40:37.797791    9391 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 03:40:37.797794    9391 kubeadm.go:310] 
	I1216 03:40:37.797839    9391 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lol1lb.540mknab2j0zikys \
	I1216 03:40:37.797893    9391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e91f5fc61f2a05b89f8c1b39ba5f2828ed76713601e7dc43cc58f3c0bc6e1119 \
	I1216 03:40:37.797906    9391 kubeadm.go:310] 	--control-plane 
	I1216 03:40:37.797909    9391 kubeadm.go:310] 
	I1216 03:40:37.797950    9391 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:40:37.797954    9391 kubeadm.go:310] 
	I1216 03:40:37.797993    9391 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lol1lb.540mknab2j0zikys \
	I1216 03:40:37.798047    9391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e91f5fc61f2a05b89f8c1b39ba5f2828ed76713601e7dc43cc58f3c0bc6e1119 
	I1216 03:40:37.798105    9391 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
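The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, which joining nodes use to pin the CA during TLS bootstrap. It can be recomputed with the standard openssl pipeline from the kubeadm docs; per the [certs] lines above, this profile keeps its CA under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'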
	I1216 03:40:37.798112    9391 cni.go:84] Creating CNI manager for ""
	I1216 03:40:37.798119    9391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:40:37.801179    9391 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:40:37.807139    9391 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:40:37.810306    9391 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
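The 496-byte 1-k8s.conflist copied here is not reproduced in the log. For orientation, a bridge conflist of the kind this step generates typically looks like the following; every value is illustrative rather than read from this run:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }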
	I1216 03:40:37.816128    9391 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:40:37.816202    9391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:40:37.816236    9391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-993000 minikube.k8s.io/updated_at=2024_12_16T03_40_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=running-upgrade-993000 minikube.k8s.io/primary=true
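The three commands just issued read the apiserver's OOM score adjustment, grant kube-system:default cluster-admin via a minikube-rbac clusterrolebinding, and stamp the node with minikube's version/commit/primary labels. A hypothetical spot-check of the labels (not run in this log):

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get node running-upgrade-993000 --show-labels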
	I1216 03:40:37.867808    9391 kubeadm.go:1113] duration metric: took 51.668792ms to wait for elevateKubeSystemPrivileges
	I1216 03:40:37.867823    9391 ops.go:34] apiserver oom_adj: -16
	I1216 03:40:37.867827    9391 kubeadm.go:394] duration metric: took 4m14.232490875s to StartCluster
	I1216 03:40:37.867836    9391 settings.go:142] acquiring lock: {Name:mk408f6daa5d140b3b9f5d3d2f79a1d62bbf39fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:40:37.867946    9391 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:40:37.868374    9391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/kubeconfig: {Name:mk517290cc56e622570f1566006f8aa91b83e6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:40:37.868565    9391 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:40:37.868595    9391 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:40:37.868652    9391 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-993000"
	I1216 03:40:37.868661    9391 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-993000"
	W1216 03:40:37.868695    9391 addons.go:243] addon storage-provisioner should already be in state true
	I1216 03:40:37.868709    9391 host.go:66] Checking if "running-upgrade-993000" exists ...
	I1216 03:40:37.868665    9391 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-993000"
	I1216 03:40:37.868743    9391 config.go:182] Loaded profile config "running-upgrade-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:40:37.868777    9391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-993000"
	I1216 03:40:37.870174    9391 kapi.go:59] client config for running-upgrade-993000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/running-upgrade-993000/client.key", CAFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023d6f70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:40:37.870295    9391 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-993000"
	W1216 03:40:37.870300    9391 addons.go:243] addon default-storageclass should already be in state true
	I1216 03:40:37.870306    9391 host.go:66] Checking if "running-upgrade-993000" exists ...
	I1216 03:40:37.873205    9391 out.go:177] * Verifying Kubernetes components...
	I1216 03:40:37.873541    9391 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:40:37.877332    9391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:40:37.877338    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	I1216 03:40:37.881122    9391 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:40:37.884204    9391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:40:37.888191    9391 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:40:37.888197    9391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:40:37.888203    9391 sshutil.go:53] new ssh client: &{IP:localhost Port:61015 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/running-upgrade-993000/id_rsa Username:docker}
	I1216 03:40:37.986620    9391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:40:37.993926    9391 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:40:37.993999    9391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:40:37.995944    9391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:40:38.000791    9391 api_server.go:72] duration metric: took 132.215333ms to wait for apiserver process to appear ...
	I1216 03:40:38.000802    9391 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:40:38.000810    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:38.022826    9391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:40:38.346069    9391 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 03:40:38.346080    9391 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 03:40:43.002779    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:43.002809    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:48.003331    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:48.003385    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:53.003732    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:53.003774    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:58.004315    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:58.004357    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:03.005031    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:03.005074    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:08.006243    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:08.006305    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 03:41:08.347773    9391 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 03:41:08.352176    9391 out.go:177] * Enabled addons: storage-provisioner
	I1216 03:41:08.360109    9391 addons.go:510] duration metric: took 30.492084167s for enable addons: enabled=[storage-provisioner]
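Net effect of the addons phase: both manifests were submitted with kubectl apply, but the follow-up StorageClass listing needed by default-storageclass timed out against the unresponsive apiserver, so only storage-provisioner is reported enabled. A manual re-check with the same kubeconfig (assuming the apiserver eventually responds) would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl \
        get storageclasses,pods -n kube-system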
	I1216 03:41:13.007463    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:13.007504    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:18.008037    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:18.008079    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:23.009735    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:23.009780    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:28.010939    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:28.010953    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:33.013004    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:33.013031    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:38.015136    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:38.015298    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:38.027896    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:41:38.027983    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:38.039253    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:41:38.039338    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:38.050046    9391 logs.go:282] 2 containers: [97afd2e0fbcd 921c9f899dad]
	I1216 03:41:38.050127    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:38.060628    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:41:38.060717    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:38.071048    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:41:38.071127    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:38.081882    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:41:38.081956    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:38.092401    9391 logs.go:282] 0 containers: []
	W1216 03:41:38.092412    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:38.092480    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:38.102997    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:41:38.103013    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:41:38.103019    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:38.114504    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:41:38.114516    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:41:38.128927    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:41:38.128938    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:41:38.151169    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:41:38.151181    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:41:38.163808    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:41:38.163820    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:41:38.176431    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:38.176445    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:38.200295    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:41:38.200303    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:41:38.212059    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:41:38.212070    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:41:38.231546    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:38.231556    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:41:38.269730    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:41:38.269826    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:41:38.270307    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:38.270314    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:38.275235    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:38.275246    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:38.310430    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:41:38.310441    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:41:38.325568    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:41:38.325579    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:41:38.338524    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:41:38.338547    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:41:38.338575    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:41:38.338585    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:41:38.338588    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:41:38.338595    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:41:38.338597    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:41:48.342513    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:53.343588    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:53.343720    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:53.355338    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:41:53.355427    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:53.366209    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:41:53.366286    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:53.376741    9391 logs.go:282] 2 containers: [97afd2e0fbcd 921c9f899dad]
	I1216 03:41:53.376817    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:53.387216    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:41:53.387298    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:53.398106    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:41:53.398195    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:53.408639    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:41:53.408718    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:53.418962    9391 logs.go:282] 0 containers: []
	W1216 03:41:53.418971    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:53.419034    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:53.429568    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:41:53.429583    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:41:53.429589    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:41:53.441249    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:53.441259    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:53.482688    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:41:53.482698    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:41:53.498902    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:41:53.498912    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:41:53.513207    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:41:53.513217    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:41:53.525280    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:41:53.525289    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:41:53.536946    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:41:53.536956    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:41:53.552447    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:53.552459    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:41:53.589859    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:41:53.589952    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:41:53.590448    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:53.590454    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:53.594891    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:53.594897    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:53.618630    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:41:53.618638    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:53.632714    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:41:53.632724    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:41:53.651487    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:41:53.651498    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:41:53.663413    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:41:53.663423    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:41:53.663448    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:41:53.663453    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:41:53.663456    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:41:53.663460    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:41:53.663463    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:42:03.667393    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:08.669596    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:08.669713    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:08.681808    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:42:08.681894    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:08.692629    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:42:08.692707    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:08.702633    9391 logs.go:282] 2 containers: [97afd2e0fbcd 921c9f899dad]
	I1216 03:42:08.702719    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:08.718379    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:42:08.718463    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:08.736144    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:42:08.736226    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:08.746677    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:42:08.746758    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:08.756783    9391 logs.go:282] 0 containers: []
	W1216 03:42:08.756796    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:08.756867    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:08.767802    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:42:08.767815    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:42:08.767821    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:42:08.782621    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:42:08.782630    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:42:08.800794    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:42:08.800807    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:42:08.812499    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:08.812512    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:42:08.851617    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:08.851711    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:08.852206    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:08.852212    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:08.856921    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:42:08.856931    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:42:08.868981    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:42:08.868993    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:42:08.880719    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:42:08.880731    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:42:08.892191    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:08.892199    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:08.917069    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:42:08.917075    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:08.928948    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:08.928960    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:08.968110    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:42:08.968120    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:42:08.982439    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:42:08.982449    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:42:08.996838    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:08.996851    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:42:08.996875    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:42:08.996879    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:08.996882    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:08.996886    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:08.996889    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:42:19.000864    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:24.001118    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:24.001286    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:24.015052    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:42:24.015135    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:24.026154    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:42:24.026233    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:24.036813    9391 logs.go:282] 2 containers: [97afd2e0fbcd 921c9f899dad]
	I1216 03:42:24.036896    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:24.047687    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:42:24.047754    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:24.058422    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:42:24.058489    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:24.070904    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:42:24.070974    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:24.081074    9391 logs.go:282] 0 containers: []
	W1216 03:42:24.081087    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:24.081162    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:24.091917    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:42:24.091933    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:42:24.091938    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:24.106760    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:24.106771    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:42:24.146159    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:24.146255    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:24.146737    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:24.146746    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:24.185027    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:42:24.185040    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:42:24.200183    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:42:24.200196    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:42:24.212588    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:42:24.212602    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:42:24.224100    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:24.224113    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:24.248971    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:24.248981    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:24.253353    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:42:24.253362    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:42:24.267855    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:42:24.267869    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:42:24.282112    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:42:24.282124    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:42:24.298061    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:42:24.298073    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:42:24.314469    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:42:24.314480    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:42:24.333280    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:24.333292    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:42:24.333320    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:42:24.333325    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:24.333328    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:24.333332    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:24.333335    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:42:34.337307    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:39.338885    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:39.338990    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:39.350596    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:42:39.350671    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:39.361952    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:42:39.362030    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:39.372439    9391 logs.go:282] 2 containers: [97afd2e0fbcd 921c9f899dad]
	I1216 03:42:39.372515    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:39.382671    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:42:39.382748    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:39.393147    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:42:39.393227    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:39.403941    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:42:39.404016    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:39.414215    9391 logs.go:282] 0 containers: []
	W1216 03:42:39.414231    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:39.414290    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:39.425936    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:42:39.425955    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:42:39.425960    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:42:39.437883    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:39.437899    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:39.461196    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:39.461205    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:42:39.497782    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:39.497873    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:39.498339    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:42:39.498343    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:42:39.519181    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:42:39.519191    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:42:39.532806    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:42:39.532816    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:42:39.550646    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:42:39.550659    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:42:39.566820    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:42:39.566833    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:42:39.578408    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:42:39.578418    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:39.590094    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:39.590104    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:39.595069    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:39.595075    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:39.630437    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:42:39.630453    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:42:39.645574    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:42:39.645584    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:42:39.657965    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:39.657976    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:42:39.658002    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:42:39.658007    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:39.658055    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:39.658085    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:39.658102    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:42:49.659733    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:54.661936    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:54.662106    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:54.674369    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:42:54.674464    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:54.685256    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:42:54.685340    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:54.696197    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:42:54.696278    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:54.707626    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:42:54.707703    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:54.717981    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:42:54.718059    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:54.728736    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:42:54.728819    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:54.738661    9391 logs.go:282] 0 containers: []
	W1216 03:42:54.738678    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:54.738746    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:54.751961    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:42:54.751980    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:54.751985    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:42:54.788733    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:54.788825    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:54.789291    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:54.789296    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:54.827088    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:42:54.827098    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:42:54.847795    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:54.847808    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:54.872734    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:54.872741    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:54.877610    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:42:54.877617    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:42:54.899643    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:42:54.899654    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:42:54.911241    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:42:54.911252    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:42:54.925928    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:42:54.925939    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:42:54.940579    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:42:54.940588    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:42:54.958413    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:42:54.958425    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:54.969922    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:42:54.969935    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:42:54.981538    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:42:54.981549    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:42:54.993692    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:42:54.993702    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:42:55.009026    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:42:55.009037    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:42:55.021073    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:55.021087    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:42:55.021115    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:42:55.021119    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:42:55.021124    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:42:55.021128    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:42:55.021131    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:43:05.025034    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:10.025824    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:10.025935    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:10.037718    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:43:10.037801    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:10.049424    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:43:10.049505    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:10.060883    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:43:10.060963    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:10.072086    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:43:10.072163    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:10.083489    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:43:10.083572    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:10.095522    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:43:10.095596    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:10.111234    9391 logs.go:282] 0 containers: []
	W1216 03:43:10.111244    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:10.111310    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:10.127116    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:43:10.127132    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:10.127136    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:43:10.169832    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:10.169931    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:10.170409    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:10.170415    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:10.175486    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:10.175493    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:10.211793    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:43:10.211804    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:43:10.226695    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:43:10.226706    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:43:10.242136    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:43:10.242147    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:43:10.254039    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:43:10.254052    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:43:10.268992    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:43:10.269003    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:43:10.287136    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:43:10.287149    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:43:10.299387    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:43:10.299399    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:10.311643    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:43:10.311652    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:43:10.327030    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:43:10.327039    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:43:10.339677    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:10.339686    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:10.363405    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:43:10.363413    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:43:10.374798    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:43:10.374809    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:43:10.386316    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:10.386326    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:43:10.386352    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:43:10.386357    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:10.386360    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:10.386364    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:10.386367    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:43:20.390285    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:25.392447    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:25.392626    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:25.404704    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:43:25.404783    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:25.415797    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:43:25.415868    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:25.426619    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:43:25.426690    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:25.437262    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:43:25.437341    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:25.448130    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:43:25.448199    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:25.459469    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:43:25.459536    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:25.469708    9391 logs.go:282] 0 containers: []
	W1216 03:43:25.469719    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:25.469792    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:25.479803    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:43:25.479820    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:25.479825    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:43:25.516355    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:25.516465    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:25.516964    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:43:25.516969    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:43:25.533053    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:43:25.533064    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:43:25.552029    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:43:25.552044    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:43:25.568659    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:43:25.568669    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:43:25.587991    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:25.588000    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:25.628284    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:43:25.628297    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:43:25.641955    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:43:25.641965    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:43:25.654636    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:43:25.654647    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:43:25.670147    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:43:25.670161    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:43:25.685550    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:25.685561    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:25.690848    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:43:25.690859    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:43:25.705188    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:43:25.705196    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:43:25.717867    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:25.717878    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:25.744889    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:43:25.744902    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:25.757988    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:25.757999    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:43:25.758026    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:43:25.758032    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:25.758046    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:25.758050    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:25.758054    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:43:35.762024    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:40.764263    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:40.764476    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:40.778813    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:43:40.778890    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:40.791416    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:43:40.791503    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:40.802581    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:43:40.802656    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:40.813435    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:43:40.813517    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:40.824288    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:43:40.824355    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:40.835156    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:43:40.835228    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:40.845612    9391 logs.go:282] 0 containers: []
	W1216 03:43:40.845626    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:40.845686    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:40.856441    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:43:40.856459    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:40.856464    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:40.879808    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:40.879816    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:40.884749    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:43:40.884758    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:43:40.896575    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:43:40.896584    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:43:40.908323    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:43:40.908335    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:43:40.920512    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:43:40.920522    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:43:40.938921    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:43:40.938931    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:43:40.956827    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:43:40.956842    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:43:40.970951    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:43:40.970963    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:43:40.985072    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:43:40.985083    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:43:40.996821    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:43:40.996831    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:43:41.016500    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:41.016511    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:43:41.054287    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:41.054379    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:41.054849    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:41.054853    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:41.088816    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:43:41.088827    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:43:41.100989    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:43:41.100999    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:41.113069    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:41.113078    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:43:41.113103    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:43:41.113107    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:41.113112    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:41.113115    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:41.113118    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:43:51.117068    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:56.119363    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:56.119563    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:56.133761    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:43:56.133855    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:56.145764    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:43:56.145840    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:56.162052    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:43:56.162132    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:56.172863    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:43:56.172932    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:56.183933    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:43:56.184019    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:56.194643    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:43:56.194711    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:56.204986    9391 logs.go:282] 0 containers: []
	W1216 03:43:56.205001    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:56.205065    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:56.215895    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:43:56.215913    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:43:56.215920    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:43:56.228301    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:43:56.228310    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:43:56.242052    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:56.242061    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:56.265081    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:56.265088    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:43:56.302884    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:56.302984    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:56.303484    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:43:56.303490    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:43:56.315521    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:43:56.315531    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:43:56.333391    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:43:56.333402    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:56.345093    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:56.345104    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:56.350515    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:56.350522    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:56.385051    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:43:56.385063    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:43:56.399672    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:43:56.399682    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:43:56.418631    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:43:56.418640    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:43:56.432656    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:43:56.432666    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:43:56.447995    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:43:56.448005    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:43:56.459866    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:43:56.459876    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:43:56.471677    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:56.471687    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:43:56.471714    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:43:56.471717    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:43:56.471720    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:43:56.471723    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:43:56.471726    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:44:06.475694    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:11.477014    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:11.477224    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:44:11.499347    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:44:11.499437    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:44:11.511089    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:44:11.511169    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:44:11.521942    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:44:11.522024    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:44:11.532455    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:44:11.532531    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:44:11.543317    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:44:11.543395    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:44:11.554043    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:44:11.554125    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:44:11.563966    9391 logs.go:282] 0 containers: []
	W1216 03:44:11.563975    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:11.564045    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:44:11.577565    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:44:11.577582    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:44:11.577587    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:44:11.601141    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:11.601152    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:44:11.637137    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:44:11.637148    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:44:11.652648    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:44:11.652658    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:44:11.670815    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:44:11.670828    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:44:11.682831    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:44:11.682844    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:44:11.698375    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:44:11.698386    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:44:11.710407    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:44:11.710420    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:11.722001    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:44:11.722013    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:44:11.738618    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:44:11.738627    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:44:11.753127    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:11.753140    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:44:11.791454    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:44:11.791547    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:44:11.792046    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:11.792056    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:11.796702    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:44:11.796710    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:44:11.811661    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:44:11.811672    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:44:11.823499    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:44:11.823511    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:44:11.841113    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:44:11.841126    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:44:11.841153    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:44:11.841158    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:44:11.841163    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:44:11.841167    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:44:11.841171    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:44:21.845091    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:26.847222    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:26.847406    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:44:26.868698    9391 logs.go:282] 1 containers: [565581d1ca75]
	I1216 03:44:26.868782    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:44:26.884717    9391 logs.go:282] 1 containers: [da9d31681c48]
	I1216 03:44:26.884804    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:44:26.910259    9391 logs.go:282] 4 containers: [aee1ac97e303 c9179bee06cd 97afd2e0fbcd 921c9f899dad]
	I1216 03:44:26.910354    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:44:26.922892    9391 logs.go:282] 1 containers: [9587b1d976d2]
	I1216 03:44:26.922974    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:44:26.937487    9391 logs.go:282] 1 containers: [20c87152ba22]
	I1216 03:44:26.937568    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:44:26.948710    9391 logs.go:282] 1 containers: [ecb76521f41a]
	I1216 03:44:26.948795    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:44:26.959191    9391 logs.go:282] 0 containers: []
	W1216 03:44:26.959203    9391 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:26.959279    9391 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:44:26.969129    9391 logs.go:282] 1 containers: [2f0388bb5160]
	I1216 03:44:26.969145    9391 logs.go:123] Gathering logs for etcd [da9d31681c48] ...
	I1216 03:44:26.969151    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9d31681c48"
	I1216 03:44:26.983341    9391 logs.go:123] Gathering logs for coredns [c9179bee06cd] ...
	I1216 03:44:26.983354    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9179bee06cd"
	I1216 03:44:26.995454    9391 logs.go:123] Gathering logs for coredns [97afd2e0fbcd] ...
	I1216 03:44:26.995467    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97afd2e0fbcd"
	I1216 03:44:27.007163    9391 logs.go:123] Gathering logs for kube-proxy [20c87152ba22] ...
	I1216 03:44:27.007175    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20c87152ba22"
	I1216 03:44:27.019034    9391 logs.go:123] Gathering logs for container status ...
	I1216 03:44:27.019047    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:27.030624    9391 logs.go:123] Gathering logs for kube-apiserver [565581d1ca75] ...
	I1216 03:44:27.030638    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 565581d1ca75"
	I1216 03:44:27.044883    9391 logs.go:123] Gathering logs for kube-controller-manager [ecb76521f41a] ...
	I1216 03:44:27.044896    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb76521f41a"
	I1216 03:44:27.062859    9391 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:27.062871    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 03:44:27.099873    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:44:27.099966    9391 logs.go:138] Found kubelet problem: Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:44:27.100449    9391 logs.go:123] Gathering logs for coredns [921c9f899dad] ...
	I1216 03:44:27.100454    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 921c9f899dad"
	I1216 03:44:27.112245    9391 logs.go:123] Gathering logs for kube-scheduler [9587b1d976d2] ...
	I1216 03:44:27.112256    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9587b1d976d2"
	I1216 03:44:27.127784    9391 logs.go:123] Gathering logs for storage-provisioner [2f0388bb5160] ...
	I1216 03:44:27.127794    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0388bb5160"
	I1216 03:44:27.139186    9391 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:27.139199    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:27.143783    9391 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:27.143791    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:44:27.183905    9391 logs.go:123] Gathering logs for coredns [aee1ac97e303] ...
	I1216 03:44:27.183916    9391 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aee1ac97e303"
	I1216 03:44:27.200289    9391 logs.go:123] Gathering logs for Docker ...
	I1216 03:44:27.200302    9391 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:44:27.223078    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:44:27.223089    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 03:44:27.223113    9391 out.go:270] X Problems detected in kubelet:
	W1216 03:44:27.223117    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	W1216 03:44:27.223120    9391 out.go:270]   Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	I1216 03:44:27.223124    9391 out.go:358] Setting ErrFile to fd 2...
	I1216 03:44:27.223139    9391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:44:37.225726    9391 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:42.226678    9391 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:42.231217    9391 out.go:201] 
	W1216 03:44:42.237176    9391 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1216 03:44:42.237182    9391 out.go:270] * 
	W1216 03:44:42.237619    9391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:44:42.247121    9391 out.go:201] 

                                                
                                                
** /stderr **
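The only kubelet problem the stderr above flags is a node-authorizer denial: the kubelet, running as system:node:running-upgrade-993000, cannot list the "coredns" ConfigMap because the authorizer sees no relationship between the node and that object. The 11:40:51 event is re-printed on every log-gathering pass, so it is a single occurrence repeated by the reporter, not a recurring fault. A hypothetical way to confirm the denial from inside the guest (not part of the test run; assumes SSH access to the VM and that the embedded kubeconfig is still valid):

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    auth can-i list configmaps --namespace=kube-system \
	    --as=system:node:running-upgrade-993000 --as-group=system:nodes

An answer of "no" here would match the reflector errors in the kubelet journal.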
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-993000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
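Exit status 80 (GUEST_START) is the end state of the retry loop visible in the stderr: each healthz probe of https://10.0.2.15:8443 dies after about 5s with a client timeout, minikube gathers container logs, waits roughly 10s, and tries again until the 6m0s node-start budget runs out. A minimal stand-in for the failing probe, assuming the guest IP and port from the log are reachable from where it is run:

	# one-shot probe of the same endpoint; -k because the apiserver serves a
	# certificate the host does not trust, --max-time mirrors the ~5s client timeout
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; in this run the request never completes, matching the repeated "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" lines.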
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-16 03:44:42.322227 -0800 PST m=+1272.123674001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-993000 -n running-upgrade-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-993000 -n running-upgrade-993000: exit status 2 (15.6906475s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-993000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-989000 sudo cat                            | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo cat                            | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo cat                            | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo cat                            | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo                                | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo find                           | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-989000 sudo crio                           | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-989000                                     | cilium-989000             | jenkins | v1.34.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:34 PST |
	| start   | -p kubernetes-upgrade-939000                         | kubernetes-upgrade-939000 | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-803000                             | offline-docker-803000     | jenkins | v1.34.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:34 PST |
	| start   | -p stopped-upgrade-873000                            | minikube                  | jenkins | v1.26.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:35 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-939000                         | kubernetes-upgrade-939000 | jenkins | v1.34.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:34 PST |
	| start   | -p kubernetes-upgrade-939000                         | kubernetes-upgrade-939000 | jenkins | v1.34.0 | 16 Dec 24 03:34 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-939000                         | kubernetes-upgrade-939000 | jenkins | v1.34.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:34 PST |
	| start   | -p running-upgrade-993000                            | minikube                  | jenkins | v1.26.0 | 16 Dec 24 03:34 PST | 16 Dec 24 03:35 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-873000 stop                          | minikube                  | jenkins | v1.26.0 | 16 Dec 24 03:35 PST | 16 Dec 24 03:35 PST |
	| start   | -p stopped-upgrade-873000                            | stopped-upgrade-873000    | jenkins | v1.34.0 | 16 Dec 24 03:35 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-993000                            | running-upgrade-993000    | jenkins | v1.34.0 | 16 Dec 24 03:35 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-873000                            | stopped-upgrade-873000    | jenkins | v1.34.0 | 16 Dec 24 03:44 PST | 16 Dec 24 03:44 PST |
	| start   | -p pause-551000 --memory=2048                        | pause-551000              | jenkins | v1.34.0 | 16 Dec 24 03:44 PST |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 03:44:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:44:54.026215    9561 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:44:54.026361    9561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:44:54.026363    9561 out.go:358] Setting ErrFile to fd 2...
	I1216 03:44:54.026364    9561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:44:54.026482    9561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:44:54.027707    9561 out.go:352] Setting JSON to false
	I1216 03:44:54.046983    9561 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6265,"bootTime":1734343229,"procs":573,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:44:54.047059    9561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:44:54.052352    9561 out.go:177] * [pause-551000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:44:54.059320    9561 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:44:54.059373    9561 notify.go:220] Checking for updates...
	I1216 03:44:54.068317    9561 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:44:54.071346    9561 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:44:54.074336    9561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:44:54.077269    9561 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:44:54.080301    9561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:44:54.083615    9561 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:44:54.083683    9561 config.go:182] Loaded profile config "running-upgrade-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:44:54.083730    9561 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:44:54.087357    9561 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:44:54.094318    9561 start.go:297] selected driver: qemu2
	I1216 03:44:54.094322    9561 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:44:54.094330    9561 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:44:54.097114    9561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:44:54.098843    9561 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:44:54.103378    9561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:44:54.103389    9561 cni.go:84] Creating CNI manager for ""
	I1216 03:44:54.103407    9561 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:44:54.103409    9561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:44:54.103440    9561 start.go:340] cluster config:
	{Name:pause-551000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-551000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:44:54.108532    9561 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:44:54.117311    9561 out.go:177] * Starting "pause-551000" primary control-plane node in "pause-551000" cluster
	I1216 03:44:54.121352    9561 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:44:54.121364    9561 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:44:54.121372    9561 cache.go:56] Caching tarball of preloaded images
	I1216 03:44:54.121435    9561 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:44:54.121439    9561 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:44:54.121486    9561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/pause-551000/config.json ...
	I1216 03:44:54.121495    9561 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/pause-551000/config.json: {Name:mk62358ee815d2486600f40f27556b648582a58d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:44:54.122048    9561 start.go:360] acquireMachinesLock for pause-551000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:44:54.122089    9561 start.go:364] duration metric: took 37.625µs to acquireMachinesLock for "pause-551000"
	I1216 03:44:54.122097    9561 start.go:93] Provisioning new machine with config: &{Name:pause-551000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-551000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:44:54.122138    9561 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:44:54.130339    9561 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 03:44:54.152821    9561 start.go:159] libmachine.API.Create for "pause-551000" (driver="qemu2")
	I1216 03:44:54.152850    9561 client.go:168] LocalClient.Create starting
	I1216 03:44:54.152955    9561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:44:54.152990    9561 main.go:141] libmachine: Decoding PEM data...
	I1216 03:44:54.152998    9561 main.go:141] libmachine: Parsing certificate...
	I1216 03:44:54.153037    9561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:44:54.153064    9561 main.go:141] libmachine: Decoding PEM data...
	I1216 03:44:54.153070    9561 main.go:141] libmachine: Parsing certificate...
	I1216 03:44:54.153644    9561 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:44:54.359938    9561 main.go:141] libmachine: Creating SSH key...
	I1216 03:44:54.483837    9561 main.go:141] libmachine: Creating Disk image...
	I1216 03:44:54.483843    9561 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:44:54.484609    9561 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2
	I1216 03:44:54.512212    9561 main.go:141] libmachine: STDOUT: 
	I1216 03:44:54.512229    9561 main.go:141] libmachine: STDERR: 
	I1216 03:44:54.512289    9561 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2 +20000M
	I1216 03:44:54.521704    9561 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:44:54.521717    9561 main.go:141] libmachine: STDERR: 
	I1216 03:44:54.521735    9561 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2
	I1216 03:44:54.521738    9561 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:44:54.521747    9561 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:44:54.521775    9561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:64:74:54:e3:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/pause-551000/disk.qcow2
	I1216 03:44:54.524156    9561 main.go:141] libmachine: STDOUT: 
	I1216 03:44:54.524166    9561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:44:54.524187    9561 client.go:171] duration metric: took 371.337875ms to LocalClient.Create
	I1216 03:44:56.526246    9561 start.go:128] duration metric: took 2.404141042s to createHost
	I1216 03:44:56.526277    9561 start.go:83] releasing machines lock for "pause-551000", held for 2.40422875s
	W1216 03:44:56.526312    9561 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:44:56.547319    9561 out.go:177] * Deleting "pause-551000" in qemu2 ...
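	
	The start above fails at the point where libmachine launches QEMU through socket_vmnet_client: 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means nothing is listening on that socket, i.e. the socket_vmnet daemon is not running on the macOS host, so the VM is deleted and the test aborts. A minimal shell sketch for checking and restarting the daemon, assuming the /opt/socket_vmnet Homebrew layout that the command line above uses:
	
	  # Is the daemon's unix socket present on the host?
	  ls -l /var/run/socket_vmnet
	  # Restart via Homebrew services (if installed that way)...
	  sudo brew services restart socket_vmnet
	  # ...or run the daemon directly; the gateway address here is only an example.
	  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet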
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-12-16 11:35:20 UTC, ends at Mon 2024-12-16 11:44:58 UTC. --
	Dec 16 11:44:39 running-upgrade-993000 dockerd[4407]: time="2024-12-16T11:44:39.203041941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 16 11:44:39 running-upgrade-993000 dockerd[4407]: time="2024-12-16T11:44:39.203076689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 16 11:44:39 running-upgrade-993000 dockerd[4407]: time="2024-12-16T11:44:39.203191641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 11:44:39 running-upgrade-993000 dockerd[4407]: time="2024-12-16T11:44:39.203309718Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/daa93b53bcc986fe9491be3ac0254ef30dc9fcf99d28196de72e5413d3a126c6 pid=19225 runtime=io.containerd.runc.v2
	Dec 16 11:44:40 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:40Z" level=error msg="ContainerStats resp: {0x400074c840 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x400074d3c0 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x4000786e40 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x4000787580 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x4000787640 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x4000787700 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x40008281c0 linux}"
	Dec 16 11:44:41 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:41Z" level=error msg="ContainerStats resp: {0x4000399280 linux}"
	Dec 16 11:44:43 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 11:44:48 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 11:44:51 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:51Z" level=error msg="ContainerStats resp: {0x400057bcc0 linux}"
	Dec 16 11:44:51 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:51Z" level=error msg="ContainerStats resp: {0x4000965c40 linux}"
	Dec 16 11:44:52 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:52Z" level=error msg="ContainerStats resp: {0x40004f33c0 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x40008284c0 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x4000399900 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x40008291c0 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x4000829740 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x4000829b40 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x400054e940 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=error msg="ContainerStats resp: {0x4000816540 linux}"
	Dec 16 11:44:53 running-upgrade-993000 cri-dockerd[4130]: time="2024-12-16T11:44:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
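	
	The error-level "ContainerStats resp" lines above are cri-dockerd logging each stats response and are noise rather than failures. A hedged sketch for pulling the same journal slice from inside the VM; the unit names assume minikube's docker and cri-docker systemd units, and the profile name is the one from this run:
	
	  minikube ssh -p running-upgrade-993000 -- sudo journalctl -u docker -u cri-docker --no-pager --since "2024-12-16 11:44:39"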
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	daa93b53bcc98       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   a645ff39f375e
	fa2e62674a090       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   61526e2bbe341
	aee1ac97e3038       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   a645ff39f375e
	c9179bee06cd1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   61526e2bbe341
	2f0388bb5160d       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   3aa248e6e7c9c
	20c87152ba228       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   be2bc38522422
	da9d31681c486       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   f4f77e6b85751
	9587b1d976d2b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a6a677cb82d7d
	565581d1ca752       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   805fe2fdfe25b
	ecb76521f41a3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   60b880a2a3eca
	
	
	==> coredns [aee1ac97e303] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:35130->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:46214->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:41699->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:55498->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:43019->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991166105974658486.1379451270861740903. HINFO: read udp 10.244.0.3:33213->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c9179bee06cd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:45471->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:55584->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:43728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:36419->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:48994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:43568->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:43154->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:34957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:56740->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5954427096046086633.4343134708420828659. HINFO: read udp 10.244.0.2:33111->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [daa93b53bcc9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8715381490353709723.189574977594217125. HINFO: read udp 10.244.0.3:52956->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8715381490353709723.189574977594217125. HINFO: read udp 10.244.0.3:46896->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8715381490353709723.189574977594217125. HINFO: read udp 10.244.0.3:42927->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8715381490353709723.189574977594217125. HINFO: read udp 10.244.0.3:60605->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8715381490353709723.189574977594217125. HINFO: read udp 10.244.0.3:60252->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fa2e62674a09] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5394117483661801176.8294759175610403781. HINFO: read udp 10.244.0.2:51514->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5394117483661801176.8294759175610403781. HINFO: read udp 10.244.0.2:49019->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5394117483661801176.8294759175610403781. HINFO: read udp 10.244.0.2:36429->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5394117483661801176.8294759175610403781. HINFO: read udp 10.244.0.2:50114->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5394117483661801176.8294759175610403781. HINFO: read udp 10.244.0.2:34593->10.0.2.3:53: i/o timeout
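	
	All four coredns instances time out forwarding their HINFO self-test probes to 10.0.2.3:53, the built-in DNS forwarder of QEMU's user-mode network, which means the guest cannot reach host DNS at all. A hedged sketch for probing that resolver from inside the guest (busybox nslookup accepts a server argument; the query name is arbitrary):
	
	  minikube ssh -p running-upgrade-993000 -- nslookup kubernetes.io 10.0.2.3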
	
	
	==> describe nodes <==
	Name:               running-upgrade-993000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-993000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=running-upgrade-993000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T03_40_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 11:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-993000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 11:44:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 11:40:37 +0000   Mon, 16 Dec 2024 11:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 11:40:37 +0000   Mon, 16 Dec 2024 11:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 11:40:37 +0000   Mon, 16 Dec 2024 11:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 11:40:37 +0000   Mon, 16 Dec 2024 11:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-993000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd576bb5fc2c437492a085fcab6f0cab
	  System UUID:                dd576bb5fc2c437492a085fcab6f0cab
	  Boot ID:                    233a5eda-08ef-45b1-a5a2-57cfbbacbdf2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5dtl4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-qgmmp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-993000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-993000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-993000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-bvzbx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-993000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-993000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-993000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x4 over 4m26s)  kubelet          Node running-upgrade-993000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node running-upgrade-993000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node running-upgrade-993000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node running-upgrade-993000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m21s                  kubelet          Node running-upgrade-993000 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s                   node-controller  Node running-upgrade-993000 event: Registered Node running-upgrade-993000 in Controller
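	
	A hedged sketch for regenerating the node view above, assuming the kubeconfig context minikube creates under the profile name:
	
	  kubectl --context running-upgrade-993000 describe node running-upgrade-993000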
	
	
	==> dmesg <==
	[  +0.078272] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.076532] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.140129] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.084448] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.080464] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.668700] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +8.145260] systemd-fstab-generator[1921]: Ignoring "noauto" for root device
	[ +13.716577] kauditd_printk_skb: 47 callbacks suppressed
	[Dec16 11:36] systemd-fstab-generator[2675]: Ignoring "noauto" for root device
	[  +0.208423] systemd-fstab-generator[2716]: Ignoring "noauto" for root device
	[  +0.102170] systemd-fstab-generator[2727]: Ignoring "noauto" for root device
	[  +0.112963] systemd-fstab-generator[2740]: Ignoring "noauto" for root device
	[  +5.005152] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.371605] systemd-fstab-generator[4086]: Ignoring "noauto" for root device
	[  +0.090064] systemd-fstab-generator[4098]: Ignoring "noauto" for root device
	[  +0.081141] systemd-fstab-generator[4109]: Ignoring "noauto" for root device
	[  +0.097285] systemd-fstab-generator[4123]: Ignoring "noauto" for root device
	[  +2.600542] systemd-fstab-generator[4393]: Ignoring "noauto" for root device
	[  +2.640417] systemd-fstab-generator[4758]: Ignoring "noauto" for root device
	[  +1.025959] systemd-fstab-generator[4882]: Ignoring "noauto" for root device
	[  +2.786266] kauditd_printk_skb: 80 callbacks suppressed
	[ +15.611956] kauditd_printk_skb: 3 callbacks suppressed
	[Dec16 11:40] systemd-fstab-generator[13641]: Ignoring "noauto" for root device
	[  +5.629766] systemd-fstab-generator[14255]: Ignoring "noauto" for root device
	[  +0.491527] systemd-fstab-generator[14390]: Ignoring "noauto" for root device
	
	
	==> etcd [da9d31681c48] <==
	{"level":"info","ts":"2024-12-16T11:40:33.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T11:40:33.158Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T11:40:33.159Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T11:40:33.159Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-16T11:40:33.159Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-16T11:40:33.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-16T11:40:33.159Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-16T11:40:34.054Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-993000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:40:34.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:40:34.061Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-16T11:40:34.055Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T11:40:34.063Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:44:58 up 9 min,  0 users,  load average: 0.28, 0.24, 0.16
	Linux running-upgrade-993000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [565581d1ca75] <==
	I1216 11:40:35.279193       1 cache.go:39] Caches are synced for autoregister controller
	I1216 11:40:35.279838       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1216 11:40:35.280726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 11:40:35.280837       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1216 11:40:35.282350       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 11:40:35.282443       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1216 11:40:35.325694       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1216 11:40:36.015922       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1216 11:40:36.185549       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 11:40:36.186701       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 11:40:36.186711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 11:40:36.315340       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 11:40:36.324769       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 11:40:36.360004       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1216 11:40:36.362331       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1216 11:40:36.362670       1 controller.go:611] quota admission added evaluator for: endpoints
	I1216 11:40:36.363835       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 11:40:37.336415       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1216 11:40:37.680400       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1216 11:40:37.685264       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1216 11:40:37.689690       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1216 11:40:37.739882       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 11:40:50.540488       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1216 11:40:50.890973       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1216 11:40:51.639623       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ecb76521f41a] <==
	I1216 11:40:50.047245       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1216 11:40:50.047743       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1216 11:40:50.047934       1 event.go:294] "Event occurred" object="running-upgrade-993000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-993000 event: Registered Node running-upgrade-993000 in Controller"
	I1216 11:40:50.050895       1 shared_informer.go:262] Caches are synced for endpoint
	I1216 11:40:50.086355       1 shared_informer.go:262] Caches are synced for expand
	I1216 11:40:50.088327       1 shared_informer.go:262] Caches are synced for cronjob
	I1216 11:40:50.090547       1 shared_informer.go:262] Caches are synced for stateful set
	I1216 11:40:50.092373       1 shared_informer.go:262] Caches are synced for PV protection
	I1216 11:40:50.141586       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1216 11:40:50.149684       1 shared_informer.go:262] Caches are synced for attach detach
	I1216 11:40:50.154824       1 shared_informer.go:262] Caches are synced for disruption
	I1216 11:40:50.154854       1 disruption.go:371] Sending events to api server.
	I1216 11:40:50.185961       1 shared_informer.go:262] Caches are synced for deployment
	I1216 11:40:50.286557       1 shared_informer.go:262] Caches are synced for HPA
	I1216 11:40:50.291661       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 11:40:50.292032       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 11:40:50.335825       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1216 11:40:50.338066       1 shared_informer.go:262] Caches are synced for crt configmap
	I1216 11:40:50.543033       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bvzbx"
	I1216 11:40:50.707312       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 11:40:50.736444       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 11:40:50.736455       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1216 11:40:50.892062       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1216 11:40:51.093399       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5dtl4"
	I1216 11:40:51.096659       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qgmmp"
	
	
	==> kube-proxy [20c87152ba22] <==
	I1216 11:40:51.626170       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1216 11:40:51.626289       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1216 11:40:51.626336       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1216 11:40:51.637679       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1216 11:40:51.637690       1 server_others.go:206] "Using iptables Proxier"
	I1216 11:40:51.637702       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1216 11:40:51.637800       1 server.go:661] "Version info" version="v1.24.1"
	I1216 11:40:51.637808       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:40:51.638081       1 config.go:317] "Starting service config controller"
	I1216 11:40:51.638091       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1216 11:40:51.638098       1 config.go:226] "Starting endpoint slice config controller"
	I1216 11:40:51.638120       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1216 11:40:51.638381       1 config.go:444] "Starting node config controller"
	I1216 11:40:51.638412       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1216 11:40:51.738680       1 shared_informer.go:262] Caches are synced for node config
	I1216 11:40:51.738684       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1216 11:40:51.738693       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [9587b1d976d2] <==
	W1216 11:40:35.251687       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 11:40:35.251698       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1216 11:40:35.251751       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 11:40:35.251763       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1216 11:40:35.251776       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 11:40:35.251807       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1216 11:40:35.251852       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 11:40:35.251860       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1216 11:40:35.251873       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 11:40:35.251875       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 11:40:35.251923       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 11:40:35.251931       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1216 11:40:36.130566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 11:40:36.130689       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 11:40:36.148833       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 11:40:36.148882       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1216 11:40:36.159553       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 11:40:36.159589       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1216 11:40:36.181598       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 11:40:36.181607       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1216 11:40:36.236600       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 11:40:36.236722       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1216 11:40:36.252759       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 11:40:36.252838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1216 11:40:36.444222       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
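	
	The "forbidden ... cannot list resource" warnings above come from the scheduler's informers starting before the apiserver has published its RBAC rules; the closing "Caches are synced" line shows they recover once the system:kube-scheduler permissions become visible. A hedged sketch for confirming the binding exists:
	
	  kubectl --context running-upgrade-993000 get clusterrolebinding system:kube-scheduler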
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-12-16 11:35:20 UTC, ends at Mon 2024-12-16 11:44:58 UTC. --
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.224376   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4-tmp\") pod \"storage-provisioner\" (UID: \"b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4\") " pod="kube-system/storage-provisioner"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.329255   14261 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.329274   14261 projected.go:192] Error preparing data for projected volume kube-api-access-p6vsf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.329313   14261 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4-kube-api-access-p6vsf podName:b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4 nodeName:}" failed. No retries permitted until 2024-12-16 11:40:50.829297266 +0000 UTC m=+13.160467986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6vsf" (UniqueName: "kubernetes.io/projected/b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4-kube-api-access-p6vsf") pod "storage-provisioner" (UID: "b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4") : configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.544941   14261 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.731115   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jd5l\" (UniqueName: \"kubernetes.io/projected/da8e03ad-291e-4a86-8179-4828057db9fe-kube-api-access-9jd5l\") pod \"kube-proxy-bvzbx\" (UID: \"da8e03ad-291e-4a86-8179-4828057db9fe\") " pod="kube-system/kube-proxy-bvzbx"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.731232   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da8e03ad-291e-4a86-8179-4828057db9fe-kube-proxy\") pod \"kube-proxy-bvzbx\" (UID: \"da8e03ad-291e-4a86-8179-4828057db9fe\") " pod="kube-system/kube-proxy-bvzbx"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.731254   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da8e03ad-291e-4a86-8179-4828057db9fe-lib-modules\") pod \"kube-proxy-bvzbx\" (UID: \"da8e03ad-291e-4a86-8179-4828057db9fe\") " pod="kube-system/kube-proxy-bvzbx"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: I1216 11:40:50.731276   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da8e03ad-291e-4a86-8179-4828057db9fe-xtables-lock\") pod \"kube-proxy-bvzbx\" (UID: \"da8e03ad-291e-4a86-8179-4828057db9fe\") " pod="kube-system/kube-proxy-bvzbx"
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.832647   14261 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.832669   14261 projected.go:192] Error preparing data for projected volume kube-api-access-p6vsf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.832693   14261 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4-kube-api-access-p6vsf podName:b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4 nodeName:}" failed. No retries permitted until 2024-12-16 11:40:51.832682238 +0000 UTC m=+14.163852959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6vsf" (UniqueName: "kubernetes.io/projected/b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4-kube-api-access-p6vsf") pod "storage-provisioner" (UID: "b674de3a-6ef5-4b8f-bf69-8c6c0b175bb4") : configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.842473   14261 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.842487   14261 projected.go:192] Error preparing data for projected volume kube-api-access-9jd5l for pod kube-system/kube-proxy-bvzbx: configmap "kube-root-ca.crt" not found
	Dec 16 11:40:50 running-upgrade-993000 kubelet[14261]: E1216 11:40:50.842522   14261 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/da8e03ad-291e-4a86-8179-4828057db9fe-kube-api-access-9jd5l podName:da8e03ad-291e-4a86-8179-4828057db9fe nodeName:}" failed. No retries permitted until 2024-12-16 11:40:51.342509308 +0000 UTC m=+13.673680028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jd5l" (UniqueName: "kubernetes.io/projected/da8e03ad-291e-4a86-8179-4828057db9fe-kube-api-access-9jd5l") pod "kube-proxy-bvzbx" (UID: "da8e03ad-291e-4a86-8179-4828057db9fe") : configmap "kube-root-ca.crt" not found
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.096590   14261 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: W1216 11:40:51.098785   14261 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: E1216 11:40:51.098840   14261 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-993000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-993000' and this object
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.101289   14261 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.240518   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60595b09-58d4-4729-a096-9ee7e8dc36cd-config-volume\") pod \"coredns-6d4b75cb6d-5dtl4\" (UID: \"60595b09-58d4-4729-a096-9ee7e8dc36cd\") " pod="kube-system/coredns-6d4b75cb6d-5dtl4"
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.240566   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6z6\" (UniqueName: \"kubernetes.io/projected/60595b09-58d4-4729-a096-9ee7e8dc36cd-kube-api-access-8d6z6\") pod \"coredns-6d4b75cb6d-5dtl4\" (UID: \"60595b09-58d4-4729-a096-9ee7e8dc36cd\") " pod="kube-system/coredns-6d4b75cb6d-5dtl4"
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.240578   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdv5\" (UniqueName: \"kubernetes.io/projected/b14b0003-8b83-4b31-b87d-28ef95334727-kube-api-access-gjdv5\") pod \"coredns-6d4b75cb6d-qgmmp\" (UID: \"b14b0003-8b83-4b31-b87d-28ef95334727\") " pod="kube-system/coredns-6d4b75cb6d-qgmmp"
	Dec 16 11:40:51 running-upgrade-993000 kubelet[14261]: I1216 11:40:51.240588   14261 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b14b0003-8b83-4b31-b87d-28ef95334727-config-volume\") pod \"coredns-6d4b75cb6d-qgmmp\" (UID: \"b14b0003-8b83-4b31-b87d-28ef95334727\") " pod="kube-system/coredns-6d4b75cb6d-qgmmp"
	Dec 16 11:44:39 running-upgrade-993000 kubelet[14261]: I1216 11:44:39.971774   14261 scope.go:110] "RemoveContainer" containerID="97afd2e0fbcd3cd04753ed1baa71f4d90203125ef5dd649d2b2618c32eab356d"
	Dec 16 11:44:39 running-upgrade-993000 kubelet[14261]: I1216 11:44:39.987766   14261 scope.go:110] "RemoveContainer" containerID="921c9f899dadb6ed093a2e6d5a6f4a1a99a0eeabe17a0d60a47641e619cb5da5"
	
	
	==> storage-provisioner [2f0388bb5160] <==
	I1216 11:40:52.052516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 11:40:52.056718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 11:40:52.056735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 11:40:52.060739       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 11:40:52.060795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-993000_8c20e23e-24c5-424f-816c-e98da7fbe01e!
	I1216 11:40:52.062158       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33bdeca8-9461-4ce3-a9ae-95b650fbd6b6", APIVersion:"v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-993000_8c20e23e-24c5-424f-816c-e98da7fbe01e became leader
	I1216 11:40:52.161658       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-993000_8c20e23e-24c5-424f-816c-e98da7fbe01e!
	

-- /stdout --
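The storage-provisioner log above shows a clean leader election on the kube-system/k8s.io-minikube-hostpath lock, so the provisioner itself was healthy while the control plane was still up; only the status probe below reports the apiserver as stopped. For anyone reproducing this, the election record of an Endpoints-based lock can be read back with client-go. This is a minimal sketch, assuming the kubeconfig path from this run and the standard Endpoints-lock annotation key:

    package main

    // Minimal sketch: read the leader-election record that an Endpoints-based
    // lock stores in an annotation. The kubeconfig path below is the one used
    // by this CI run and is an assumption anywhere else.

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20107-6737/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ep, err := cs.CoreV1().Endpoints("kube-system").Get(context.Background(),
            "k8s.io-minikube-hostpath", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Holder identity plus acquire/renew times live in this annotation.
        fmt.Println(ep.Annotations["control-plane.alpha.kubernetes.io/leader"])
    }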
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-993000 -n running-upgrade-993000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-993000 -n running-upgrade-993000: exit status 2 (15.706287792s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-993000" apiserver is not running, skipping kubectl commands (state="Stopped")
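The probe the helper uses here is just the minikube binary invoked with a Go template; exit status 2 maps to a stopped component and is deliberately treated as non-fatal. A minimal sketch of the same probe, assuming the binary path and profile name from this run:

    package main

    // Minimal sketch: replicate the helper's apiserver probe by shelling out
    // to the minikube binary and reading the exit code. Binary path and
    // profile name are the ones from this run.

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.APIServer}}", "-p", "running-upgrade-993000")
        out, err := cmd.CombinedOutput()
        fmt.Printf("status output: %s", out)
        if ee, ok := err.(*exec.ExitError); ok {
            // Exit status 2 means a stopped component; the helper logs it as
            // "may be ok" and skips kubectl instead of failing outright.
            fmt.Printf("exit code: %d\n", ee.ExitCode())
        } else if err != nil {
            fmt.Println("probe failed:", err)
        }
    }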
helpers_test.go:175: Cleaning up "running-upgrade-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-993000
--- FAIL: TestRunningBinaryUpgrade (629.09s)

TestKubernetesUpgrade (19.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.953991958s)

-- stdout --
	* [kubernetes-upgrade-939000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-939000" primary control-plane node in "kubernetes-upgrade-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:34:26.271883    9279 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:34:26.272050    9279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:26.272053    9279 out.go:358] Setting ErrFile to fd 2...
	I1216 03:34:26.272055    9279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:26.272206    9279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:34:26.273370    9279 out.go:352] Setting JSON to false
	I1216 03:34:26.291280    9279 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5637,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:34:26.291355    9279 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:34:26.296957    9279 out.go:177] * [kubernetes-upgrade-939000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:34:26.314988    9279 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:34:26.315036    9279 notify.go:220] Checking for updates...
	I1216 03:34:26.322830    9279 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:34:26.326929    9279 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:34:26.329998    9279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:34:26.332927    9279 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:34:26.335926    9279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:34:26.339263    9279 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:34:26.339345    9279 config.go:182] Loaded profile config "offline-docker-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:34:26.339396    9279 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:34:26.342941    9279 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:34:26.349951    9279 start.go:297] selected driver: qemu2
	I1216 03:34:26.349962    9279 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:34:26.349972    9279 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:34:26.352824    9279 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:34:26.354215    9279 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:34:26.357926    9279 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:34:26.357939    9279 cni.go:84] Creating CNI manager for ""
	I1216 03:34:26.357962    9279 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 03:34:26.357984    9279 start.go:340] cluster config:
	{Name:kubernetes-upgrade-939000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:34:26.363247    9279 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:34:26.371838    9279 out.go:177] * Starting "kubernetes-upgrade-939000" primary control-plane node in "kubernetes-upgrade-939000" cluster
	I1216 03:34:26.375993    9279 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:34:26.376011    9279 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:34:26.376032    9279 cache.go:56] Caching tarball of preloaded images
	I1216 03:34:26.376117    9279 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:34:26.376126    9279 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 03:34:26.376203    9279 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kubernetes-upgrade-939000/config.json ...
	I1216 03:34:26.376216    9279 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kubernetes-upgrade-939000/config.json: {Name:mk13c4d0d430813533320fac369d6959261fb6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:34:26.376776    9279 start.go:360] acquireMachinesLock for kubernetes-upgrade-939000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:26.376833    9279 start.go:364] duration metric: took 49.542µs to acquireMachinesLock for "kubernetes-upgrade-939000"
	I1216 03:34:26.376847    9279 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:34:26.376878    9279 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:34:26.380930    9279 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:34:26.399554    9279 start.go:159] libmachine.API.Create for "kubernetes-upgrade-939000" (driver="qemu2")
	I1216 03:34:26.399584    9279 client.go:168] LocalClient.Create starting
	I1216 03:34:26.399662    9279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:34:26.399705    9279 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:26.399717    9279 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:26.399755    9279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:34:26.399789    9279 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:26.399801    9279 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:26.400274    9279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:34:26.562350    9279 main.go:141] libmachine: Creating SSH key...
	I1216 03:34:26.772110    9279 main.go:141] libmachine: Creating Disk image...
	I1216 03:34:26.772118    9279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:34:26.772384    9279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:26.782868    9279 main.go:141] libmachine: STDOUT: 
	I1216 03:34:26.782890    9279 main.go:141] libmachine: STDERR: 
	I1216 03:34:26.782950    9279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2 +20000M
	I1216 03:34:26.791483    9279 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:34:26.791495    9279 main.go:141] libmachine: STDERR: 
	I1216 03:34:26.791517    9279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:26.791522    9279 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:34:26.791536    9279 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:26.791568    9279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d0:eb:fd:18:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:26.793433    9279 main.go:141] libmachine: STDOUT: 
	I1216 03:34:26.793450    9279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:26.793476    9279 client.go:171] duration metric: took 393.891834ms to LocalClient.Create
	I1216 03:34:28.795621    9279 start.go:128] duration metric: took 2.418754167s to createHost
	I1216 03:34:28.795676    9279 start.go:83] releasing machines lock for "kubernetes-upgrade-939000", held for 2.418862875s
	W1216 03:34:28.795728    9279 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:28.805228    9279 out.go:177] * Deleting "kubernetes-upgrade-939000" in qemu2 ...
	W1216 03:34:28.846391    9279 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:28.846422    9279 start.go:729] Will try again in 5 seconds ...
	I1216 03:34:33.845234    9279 start.go:360] acquireMachinesLock for kubernetes-upgrade-939000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:33.845425    9279 start.go:364] duration metric: took 154.167µs to acquireMachinesLock for "kubernetes-upgrade-939000"
	I1216 03:34:33.845456    9279 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:34:33.845516    9279 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:34:33.855828    9279 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:34:33.877140    9279 start.go:159] libmachine.API.Create for "kubernetes-upgrade-939000" (driver="qemu2")
	I1216 03:34:33.877171    9279 client.go:168] LocalClient.Create starting
	I1216 03:34:33.877256    9279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:34:33.877309    9279 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:33.877320    9279 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:33.877356    9279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:34:33.877396    9279 main.go:141] libmachine: Decoding PEM data...
	I1216 03:34:33.877404    9279 main.go:141] libmachine: Parsing certificate...
	I1216 03:34:33.878002    9279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:34:34.039421    9279 main.go:141] libmachine: Creating SSH key...
	I1216 03:34:34.127534    9279 main.go:141] libmachine: Creating Disk image...
	I1216 03:34:34.127545    9279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:34:34.127777    9279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:34.138062    9279 main.go:141] libmachine: STDOUT: 
	I1216 03:34:34.138077    9279 main.go:141] libmachine: STDERR: 
	I1216 03:34:34.138143    9279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2 +20000M
	I1216 03:34:34.146772    9279 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:34:34.146787    9279 main.go:141] libmachine: STDERR: 
	I1216 03:34:34.146798    9279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:34.146813    9279 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:34:34.146823    9279 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:34.146849    9279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:3f:68:23:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:34.148754    9279 main.go:141] libmachine: STDOUT: 
	I1216 03:34:34.148770    9279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:34.148792    9279 client.go:171] duration metric: took 271.823916ms to LocalClient.Create
	I1216 03:34:36.149555    9279 start.go:128] duration metric: took 2.305665542s to createHost
	I1216 03:34:36.149610    9279 start.go:83] releasing machines lock for "kubernetes-upgrade-939000", held for 2.305831959s
	W1216 03:34:36.150020    9279 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:36.160547    9279 out.go:201] 
	W1216 03:34:36.164738    9279 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:34:36.164761    9279 out.go:270] * 
	* 
	W1216 03:34:36.167241    9279 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:34:36.176566    9279 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
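Both start attempts above die at the same step: qemu is launched through the socket_vmnet_client wrapper, which cannot reach the /var/run/socket_vmnet unix socket, so no VM ever boots. One way to confirm the daemon side is down, independent of minikube, is to dial the socket directly. A minimal sketch, assuming the socket path shown in the logs:

    package main

    // Minimal sketch: dial the unix socket that socket_vmnet_client needs.
    // "connection refused" or "no such file or directory" here matches the
    // driver failure in the logs above.

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }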
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-939000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-939000: (3.772173667s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-939000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-939000 status --format={{.Host}}: exit status 7 (68.885583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.218041291s)

-- stdout --
	* [kubernetes-upgrade-939000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-939000" primary control-plane node in "kubernetes-upgrade-939000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-939000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-939000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:34:40.065302    9329 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:34:40.065466    9329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:40.065469    9329 out.go:358] Setting ErrFile to fd 2...
	I1216 03:34:40.065472    9329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:34:40.065590    9329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:34:40.066699    9329 out.go:352] Setting JSON to false
	I1216 03:34:40.084777    9329 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5651,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:34:40.084877    9329 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:34:40.090171    9329 out.go:177] * [kubernetes-upgrade-939000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:34:40.098205    9329 notify.go:220] Checking for updates...
	I1216 03:34:40.103066    9329 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:34:40.109116    9329 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:34:40.115137    9329 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:34:40.123042    9329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:34:40.128592    9329 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:34:40.136109    9329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:34:40.139388    9329 config.go:182] Loaded profile config "kubernetes-upgrade-939000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 03:34:40.139658    9329 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:34:40.143087    9329 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:34:40.150095    9329 start.go:297] selected driver: qemu2
	I1216 03:34:40.150101    9329 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:34:40.150147    9329 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:34:40.152624    9329 cni.go:84] Creating CNI manager for ""
	I1216 03:34:40.152659    9329 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:34:40.152681    9329 start.go:340] cluster config:
	{Name:kubernetes-upgrade-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:34:40.157002    9329 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:34:40.163889    9329 out.go:177] * Starting "kubernetes-upgrade-939000" primary control-plane node in "kubernetes-upgrade-939000" cluster
	I1216 03:34:40.167073    9329 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:34:40.167088    9329 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:34:40.167095    9329 cache.go:56] Caching tarball of preloaded images
	I1216 03:34:40.167171    9329 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:34:40.167178    9329 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:34:40.167260    9329 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kubernetes-upgrade-939000/config.json ...
	I1216 03:34:40.167646    9329 start.go:360] acquireMachinesLock for kubernetes-upgrade-939000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:40.167678    9329 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "kubernetes-upgrade-939000"
	I1216 03:34:40.167687    9329 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:34:40.167693    9329 fix.go:54] fixHost starting: 
	I1216 03:34:40.167827    9329 fix.go:112] recreateIfNeeded on kubernetes-upgrade-939000: state=Stopped err=<nil>
	W1216 03:34:40.167838    9329 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:34:40.175051    9329 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-939000" ...
	I1216 03:34:40.179126    9329 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:40.179173    9329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:3f:68:23:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:40.181365    9329 main.go:141] libmachine: STDOUT: 
	I1216 03:34:40.181387    9329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:40.181420    9329 fix.go:56] duration metric: took 13.732667ms for fixHost
	I1216 03:34:40.181425    9329 start.go:83] releasing machines lock for "kubernetes-upgrade-939000", held for 13.748959ms
	W1216 03:34:40.181432    9329 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:34:40.181470    9329 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:40.181474    9329 start.go:729] Will try again in 5 seconds ...
	I1216 03:34:45.181477    9329 start.go:360] acquireMachinesLock for kubernetes-upgrade-939000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:34:45.181841    9329 start.go:364] duration metric: took 300µs to acquireMachinesLock for "kubernetes-upgrade-939000"
	I1216 03:34:45.181970    9329 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:34:45.182010    9329 fix.go:54] fixHost starting: 
	I1216 03:34:45.182688    9329 fix.go:112] recreateIfNeeded on kubernetes-upgrade-939000: state=Stopped err=<nil>
	W1216 03:34:45.182714    9329 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:34:45.192203    9329 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-939000" ...
	I1216 03:34:45.200319    9329 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:34:45.200550    9329 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:3f:68:23:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubernetes-upgrade-939000/disk.qcow2
	I1216 03:34:45.211103    9329 main.go:141] libmachine: STDOUT: 
	I1216 03:34:45.211164    9329 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:34:45.211280    9329 fix.go:56] duration metric: took 29.300959ms for fixHost
	I1216 03:34:45.211297    9329 start.go:83] releasing machines lock for "kubernetes-upgrade-939000", held for 29.418333ms
	W1216 03:34:45.211504    9329 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-939000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-939000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:34:45.219204    9329 out.go:201] 
	W1216 03:34:45.223361    9329 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:34:45.223435    9329 out.go:270] * 
	* 
	W1216 03:34:45.226015    9329 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:34:45.235121    9329 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-939000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-939000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-939000 version --output=json: exit status 1 (62.654458ms)

** stderr ** 
	error: context "kubernetes-upgrade-939000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
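The kubectl failure here is on the client side rather than the cluster: the profile was never provisioned, so no kubernetes-upgrade-939000 context was ever written to the kubeconfig. A minimal sketch that checks for the context before shelling out, assuming client-go's clientcmd loader and the kubeconfig path from this run:

    package main

    // Minimal sketch: verify a kubeconfig context exists before invoking
    // kubectl, mirroring the `context "..." does not exist` failure above.
    // The kubeconfig path is the one from this run.

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/20107-6737/kubeconfig")
        if err != nil {
            fmt.Println("cannot load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["kubernetes-upgrade-939000"]; !ok {
            fmt.Println(`context "kubernetes-upgrade-939000" does not exist`)
            return
        }
        fmt.Println("context present")
    }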
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-16 03:34:45.311427 -0800 PST m=+675.096261668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-939000 -n kubernetes-upgrade-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-939000 -n kubernetes-upgrade-939000: exit status 7 (38.923292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-939000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-939000
--- FAIL: TestKubernetesUpgrade (19.20s)

TestStoppedBinaryUpgrade/Upgrade (585.6s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1203345819 start -p stopped-upgrade-873000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1203345819 start -p stopped-upgrade-873000 --memory=2200 --vm-driver=qemu2 : (52.568154083s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1203345819 -p stopped-upgrade-873000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1203345819 -p stopped-upgrade-873000 stop: (12.105927541s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-873000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-873000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.8411015s)

-- stdout --
	* [stopped-upgrade-873000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-873000" primary control-plane node in "stopped-upgrade-873000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-873000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1216 03:35:39.864913    9380 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:35:39.865456    9380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:35:39.865461    9380 out.go:358] Setting ErrFile to fd 2...
	I1216 03:35:39.865464    9380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:35:39.865651    9380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:35:39.867477    9380 out.go:352] Setting JSON to false
	I1216 03:35:39.887138    9380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5710,"bootTime":1734343229,"procs":576,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:35:39.887348    9380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:35:39.893048    9380 out.go:177] * [stopped-upgrade-873000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:35:39.900469    9380 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:35:39.900783    9380 notify.go:220] Checking for updates...
	I1216 03:35:39.909022    9380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:35:39.911995    9380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:35:39.916010    9380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:35:39.919027    9380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:35:39.921929    9380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:35:39.925306    9380 config.go:182] Loaded profile config "stopped-upgrade-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:35:39.928994    9380 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1216 03:35:39.930770    9380 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:35:39.934044    9380 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:35:39.940894    9380 start.go:297] selected driver: qemu2
	I1216 03:35:39.940911    9380 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61010 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:35:39.940965    9380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:35:39.944072    9380 cni.go:84] Creating CNI manager for ""
	I1216 03:35:39.944122    9380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:35:39.944283    9380 start.go:340] cluster config:
	{Name:stopped-upgrade-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61010 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:35:39.944359    9380 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:35:39.953082    9380 out.go:177] * Starting "stopped-upgrade-873000" primary control-plane node in "stopped-upgrade-873000" cluster
	I1216 03:35:39.956980    9380 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 03:35:39.956997    9380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1216 03:35:39.957023    9380 cache.go:56] Caching tarball of preloaded images
	I1216 03:35:39.957091    9380 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:35:39.957096    9380 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1216 03:35:39.957159    9380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/config.json ...
	I1216 03:35:39.957689    9380 start.go:360] acquireMachinesLock for stopped-upgrade-873000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:35:39.957717    9380 start.go:364] duration metric: took 22.584µs to acquireMachinesLock for "stopped-upgrade-873000"
	I1216 03:35:39.957724    9380 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:35:39.957747    9380 fix.go:54] fixHost starting: 
	I1216 03:35:39.957862    9380 fix.go:112] recreateIfNeeded on stopped-upgrade-873000: state=Stopped err=<nil>
	W1216 03:35:39.957868    9380 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:35:39.962047    9380 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-873000" ...
	I1216 03:35:39.970138    9380 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:35:39.970231    9380 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/qemu.pid -nic user,model=virtio,hostfwd=tcp::60975-:22,hostfwd=tcp::60976-:2376,hostname=stopped-upgrade-873000 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/disk.qcow2
	I1216 03:35:40.016569    9380 main.go:141] libmachine: STDOUT: 
	I1216 03:35:40.016595    9380 main.go:141] libmachine: STDERR: 
	I1216 03:35:40.016601    9380 main.go:141] libmachine: Waiting for VM to start (ssh -p 60975 docker@127.0.0.1)...
	I1216 03:35:59.223039    9380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/config.json ...
	I1216 03:35:59.223325    9380 machine.go:93] provisionDockerMachine start ...
	I1216 03:35:59.223422    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.223590    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.223595    9380 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 03:35:59.291721    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 03:35:59.291749    9380 buildroot.go:166] provisioning hostname "stopped-upgrade-873000"
	I1216 03:35:59.291833    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.291953    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.291960    9380 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-873000 && echo "stopped-upgrade-873000" | sudo tee /etc/hostname
	I1216 03:35:59.364866    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-873000
	
	I1216 03:35:59.364941    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.365055    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.365064    9380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-873000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-873000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-873000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:35:59.436917    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:35:59.436932    9380 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20107-6737/.minikube CaCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20107-6737/.minikube}
	I1216 03:35:59.436940    9380 buildroot.go:174] setting up certificates
	I1216 03:35:59.436945    9380 provision.go:84] configureAuth start
	I1216 03:35:59.436966    9380 provision.go:143] copyHostCerts
	I1216 03:35:59.437078    9380 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem, removing ...
	I1216 03:35:59.437095    9380 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem
	I1216 03:35:59.437207    9380 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/cert.pem (1123 bytes)
	I1216 03:35:59.437385    9380 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem, removing ...
	I1216 03:35:59.437389    9380 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem
	I1216 03:35:59.437442    9380 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/key.pem (1679 bytes)
	I1216 03:35:59.437558    9380 exec_runner.go:144] found /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem, removing ...
	I1216 03:35:59.437568    9380 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem
	I1216 03:35:59.437624    9380 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.pem (1082 bytes)
	I1216 03:35:59.437727    9380 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-873000 san=[127.0.0.1 localhost minikube stopped-upgrade-873000]
	I1216 03:35:59.512050    9380 provision.go:177] copyRemoteCerts
	I1216 03:35:59.512344    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:35:59.512367    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	I1216 03:35:59.549994    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:35:59.557506    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 03:35:59.564795    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 03:35:59.572064    9380 provision.go:87] duration metric: took 135.089959ms to configureAuth
	I1216 03:35:59.572073    9380 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:35:59.572186    9380 config.go:182] Loaded profile config "stopped-upgrade-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:35:59.572242    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.572345    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.572350    9380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 03:35:59.640651    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1216 03:35:59.640661    9380 buildroot.go:70] root file system type: tmpfs
	I1216 03:35:59.640718    9380 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 03:35:59.640780    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.640889    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.640923    9380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 03:35:59.712512    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 03:35:59.712894    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:35:59.713020    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:35:59.713029    9380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 03:36:00.114823    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1216 03:36:00.114838    9380 machine.go:96] duration metric: took 891.52275ms to provisionDockerMachine
	I1216 03:36:00.114845    9380 start.go:293] postStartSetup for "stopped-upgrade-873000" (driver="qemu2")
	I1216 03:36:00.114852    9380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:36:00.114947    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:36:00.114957    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	I1216 03:36:00.154864    9380 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:36:00.156068    9380 info.go:137] Remote host: Buildroot 2021.02.12
	I1216 03:36:00.156076    9380 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20107-6737/.minikube/addons for local assets ...
	I1216 03:36:00.156160    9380 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20107-6737/.minikube/files for local assets ...
	I1216 03:36:00.156296    9380 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem -> 72562.pem in /etc/ssl/certs
	I1216 03:36:00.156457    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:36:00.159163    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem --> /etc/ssl/certs/72562.pem (1708 bytes)
	I1216 03:36:00.165928    9380 start.go:296] duration metric: took 51.072708ms for postStartSetup
	I1216 03:36:00.165943    9380 fix.go:56] duration metric: took 20.208708417s for fixHost
	I1216 03:36:00.165989    9380 main.go:141] libmachine: Using SSH client type: native
	I1216 03:36:00.166095    9380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009731b0] 0x1009759f0 <nil>  [] 0s} localhost 60975 <nil> <nil>}
	I1216 03:36:00.166100    9380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:36:00.236347    9380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734348960.570892420
	
	I1216 03:36:00.236357    9380 fix.go:216] guest clock: 1734348960.570892420
	I1216 03:36:00.236361    9380 fix.go:229] Guest: 2024-12-16 03:36:00.57089242 -0800 PST Remote: 2024-12-16 03:36:00.165945 -0800 PST m=+20.413945251 (delta=404.94742ms)
	I1216 03:36:00.236375    9380 fix.go:200] guest clock delta is within tolerance: 404.94742ms
	I1216 03:36:00.236378    9380 start.go:83] releasing machines lock for "stopped-upgrade-873000", held for 20.279150916s
	I1216 03:36:00.236476    9380 ssh_runner.go:195] Run: cat /version.json
	I1216 03:36:00.236490    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	I1216 03:36:00.236476    9380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:36:00.236650    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	W1216 03:36:00.237199    9380 sshutil.go:64] dial failure (will retry): dial tcp [::1]:60975: connect: connection refused
	I1216 03:36:00.237218    9380 retry.go:31] will retry after 276.143279ms: dial tcp [::1]:60975: connect: connection refused
	W1216 03:36:00.272409    9380 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1216 03:36:00.272490    9380 ssh_runner.go:195] Run: systemctl --version
	I1216 03:36:00.274728    9380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:36:00.276253    9380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:36:00.276291    9380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 03:36:00.279277    9380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 03:36:00.284098    9380 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:36:00.284111    9380 start.go:495] detecting cgroup driver to use...
	I1216 03:36:00.284220    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:36:00.290976    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1216 03:36:00.293951    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 03:36:00.297161    9380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 03:36:00.297196    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 03:36:00.300611    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:36:00.303656    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 03:36:00.306578    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:36:00.309737    9380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:36:00.313305    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 03:36:00.316659    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 03:36:00.319675    9380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 03:36:00.322584    9380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:36:00.325430    9380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:36:00.328464    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:00.409779    9380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 03:36:00.420263    9380 start.go:495] detecting cgroup driver to use...
	I1216 03:36:00.420372    9380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 03:36:00.426434    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:36:00.431634    9380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:36:00.443979    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:36:00.449200    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 03:36:00.454743    9380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 03:36:00.529558    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 03:36:00.534688    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:36:00.540590    9380 ssh_runner.go:195] Run: which cri-dockerd
	I1216 03:36:00.542036    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 03:36:00.545263    9380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 03:36:00.552931    9380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 03:36:00.633494    9380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 03:36:00.712715    9380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 03:36:00.712788    9380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 03:36:00.718389    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:00.798284    9380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 03:36:01.930659    9380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132378917s)
	I1216 03:36:01.930736    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 03:36:01.935558    9380 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 03:36:01.941923    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 03:36:01.946520    9380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 03:36:02.024502    9380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 03:36:02.109250    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:02.179820    9380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 03:36:02.185649    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 03:36:02.190655    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:02.268243    9380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 03:36:02.306718    9380 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 03:36:02.306812    9380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 03:36:02.308861    9380 start.go:563] Will wait 60s for crictl version
	I1216 03:36:02.308913    9380 ssh_runner.go:195] Run: which crictl
	I1216 03:36:02.310310    9380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:36:02.324952    9380 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1216 03:36:02.325045    9380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 03:36:02.341656    9380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 03:36:02.367473    9380 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1216 03:36:02.367648    9380 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1216 03:36:02.368900    9380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:36:02.373260    9380 kubeadm.go:883] updating cluster {Name:stopped-upgrade-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61010 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1216 03:36:02.373342    9380 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 03:36:02.373388    9380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 03:36:02.383320    9380 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 03:36:02.383329    9380 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 03:36:02.383398    9380 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 03:36:02.386449    9380 ssh_runner.go:195] Run: which lz4
	I1216 03:36:02.387748    9380 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:36:02.388958    9380 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:36:02.388967    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1216 03:36:03.360087    9380 docker.go:653] duration metric: took 972.411375ms to copy over tarball
	I1216 03:36:03.360163    9380 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:36:04.540090    9380 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17992725s)
	I1216 03:36:04.540104    9380 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 03:36:04.556134    9380 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 03:36:04.559285    9380 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1216 03:36:04.564717    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:04.650005    9380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 03:36:06.198376    9380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548384792s)
	I1216 03:36:06.198678    9380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 03:36:06.211798    9380 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 03:36:06.211819    9380 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 03:36:06.211840    9380 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 03:36:06.218651    9380 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:06.221151    9380 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 03:36:06.223146    9380 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:06.223452    9380 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:06.225460    9380 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:06.225453    9380 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 03:36:06.226636    9380 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:06.227300    9380 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:06.227985    9380 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:06.228206    9380 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:06.229347    9380 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:06.230052    9380 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:06.230520    9380 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:06.231046    9380 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:06.231697    9380 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:06.232402    9380 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:06.750190    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:06.753051    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:06.754769    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 03:36:06.763170    9380 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1216 03:36:06.763985    9380 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:06.764093    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1216 03:36:06.768052    9380 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1216 03:36:06.768087    9380 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:06.768144    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 03:36:06.780133    9380 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1216 03:36:06.780166    9380 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1216 03:36:06.780234    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1216 03:36:06.780242    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1216 03:36:06.793328    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1216 03:36:06.793328    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 03:36:06.793775    9380 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 03:36:06.795398    9380 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1216 03:36:06.795414    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1216 03:36:06.800283    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:06.803477    9380 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 03:36:06.803492    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1216 03:36:06.815423    9380 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1216 03:36:06.815482    9380 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:06.815535    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1216 03:36:06.846970    9380 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1216 03:36:06.847007    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 03:36:06.847143    9380 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1216 03:36:06.848634    9380 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1216 03:36:06.848647    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1216 03:36:06.936197    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:06.974017    9380 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1216 03:36:06.974042    9380 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:06.974124    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1216 03:36:07.023467    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1216 03:36:07.045574    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:07.080348    9380 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1216 03:36:07.080375    9380 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:07.080440    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1216 03:36:07.086682    9380 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1216 03:36:07.086696    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1216 03:36:07.098301    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1216 03:36:07.104890    9380 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 03:36:07.105045    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:07.231509    9380 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1216 03:36:07.231546    9380 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1216 03:36:07.231567    9380 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:07.231634    9380 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 03:36:07.241308    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 03:36:07.241457    9380 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 03:36:07.242984    9380 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1216 03:36:07.242995    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1216 03:36:07.282226    9380 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 03:36:07.282237    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1216 03:36:07.322292    9380 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1216 03:36:07.428982    9380 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 03:36:07.429126    9380 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:07.442391    9380 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 03:36:07.442423    9380 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:07.442495    9380 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:36:07.457473    9380 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:36:07.457613    9380 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:36:07.459159    9380 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 03:36:07.459174    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 03:36:07.490361    9380 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:36:07.490376    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1216 03:36:07.740379    9380 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 03:36:07.740425    9380 cache_images.go:92] duration metric: took 1.528608125s to LoadCachedImages
	W1216 03:36:07.740650    9380 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1216 03:36:07.740659    9380 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1216 03:36:07.740717    9380 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-873000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 03:36:07.740790    9380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 03:36:07.754426    9380 cni.go:84] Creating CNI manager for ""
	I1216 03:36:07.754438    9380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:36:07.754681    9380 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 03:36:07.754696    9380 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-873000 NodeName:stopped-upgrade-873000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:36:07.754759    9380 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-873000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 03:36:07.754822    9380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1216 03:36:07.757740    9380 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 03:36:07.757785    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:36:07.760780    9380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1216 03:36:07.765907    9380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:36:07.771032    9380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1216 03:36:07.776810    9380 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1216 03:36:07.778116    9380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:36:07.781697    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:36:07.867863    9380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:36:07.875754    9380 certs.go:68] Setting up /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000 for IP: 10.0.2.15
	I1216 03:36:07.875765    9380 certs.go:194] generating shared ca certs ...
	I1216 03:36:07.875774    9380 certs.go:226] acquiring lock for ca certs: {Name:mk67ed11e928c780dd2836c87a10670f4077fd06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:07.876231    9380 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.key
	I1216 03:36:07.876424    9380 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.key
	I1216 03:36:07.876431    9380 certs.go:256] generating profile certs ...
	I1216 03:36:07.877021    9380 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/client.key
	I1216 03:36:07.877034    9380 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key.b01e0a6f
	I1216 03:36:07.877043    9380 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt.b01e0a6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1216 03:36:07.961880    9380 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt.b01e0a6f ...
	I1216 03:36:07.961912    9380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt.b01e0a6f: {Name:mka4055725d465d18fc93d3e1f923a8cb0272289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:07.962302    9380 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key.b01e0a6f ...
	I1216 03:36:07.962307    9380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key.b01e0a6f: {Name:mkdebb50ca226af595cfefa2d7a42b38d341b7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:36:07.962628    9380 certs.go:381] copying /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt.b01e0a6f -> /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt
	I1216 03:36:07.962799    9380 certs.go:385] copying /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key.b01e0a6f -> /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key
	I1216 03:36:07.963176    9380 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/proxy-client.key
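The apiserver serving certificate above is issued with a SAN for every address clients may use: 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes Service), 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. A compact standard-library Go sketch of minting a cert with those IP SANs; it self-signs for brevity, whereas minikube's certs.go signs with its minikubeCA:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs seen in the crypto.go log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("10.0.2.15"),
            },
        }

        // Self-signed for brevity; minikube instead signs with its CA key.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }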
	I1216 03:36:07.963345    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256.pem (1338 bytes)
	W1216 03:36:07.963538    9380 certs.go:480] ignoring /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256_empty.pem, impossibly tiny 0 bytes
	I1216 03:36:07.963547    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:36:07.963571    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem (1082 bytes)
	I1216 03:36:07.963590    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:36:07.963619    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/key.pem (1679 bytes)
	I1216 03:36:07.963667    9380 certs.go:484] found cert: /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem (1708 bytes)
	I1216 03:36:07.965089    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:36:07.972129    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 03:36:07.978786    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:36:07.985535    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 03:36:07.992546    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 03:36:07.999174    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:36:08.005723    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:36:08.012875    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:36:08.020564    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/ssl/certs/72562.pem --> /usr/share/ca-certificates/72562.pem (1708 bytes)
	I1216 03:36:08.027706    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:36:08.034429    9380 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/7256.pem --> /usr/share/ca-certificates/7256.pem (1338 bytes)
	I1216 03:36:08.041094    9380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:36:08.046262    9380 ssh_runner.go:195] Run: openssl version
	I1216 03:36:08.048145    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72562.pem && ln -fs /usr/share/ca-certificates/72562.pem /etc/ssl/certs/72562.pem"
	I1216 03:36:08.051240    9380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72562.pem
	I1216 03:36:08.052659    9380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 11:24 /usr/share/ca-certificates/72562.pem
	I1216 03:36:08.052688    9380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72562.pem
	I1216 03:36:08.054521    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72562.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 03:36:08.057696    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 03:36:08.060937    9380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:08.062398    9380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:08.062421    9380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:36:08.064081    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 03:36:08.066994    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7256.pem && ln -fs /usr/share/ca-certificates/7256.pem /etc/ssl/certs/7256.pem"
	I1216 03:36:08.069831    9380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7256.pem
	I1216 03:36:08.071244    9380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 11:24 /usr/share/ca-certificates/7256.pem
	I1216 03:36:08.071274    9380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7256.pem
	I1216 03:36:08.072929    9380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7256.pem /etc/ssl/certs/51391683.0"
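The openssl x509 -hash -noout calls above compute the subject-name hash OpenSSL uses to locate trusted CAs: each certificate must be reachable as /etc/ssl/certs/<hash>.0, which is exactly what the symlinks 3ec20f2e.0, b5213941.0, and 51391683.0 provide. A small Go wrapper around the same invocation (computing the hash natively is fiddly, so this shells out to openssl just as the log does):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject-name hash for a PEM certificate.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The hash names the symlink OpenSSL will look up.
        fmt.Printf("/etc/ssl/certs/%s.0\n", h)
    }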
	I1216 03:36:08.076387    9380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:36:08.077752    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:36:08.080452    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:36:08.082523    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:36:08.084470    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:36:08.086421    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:36:08.088166    9380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
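Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and 1 if the cert expires inside the window. The same check in pure Go against one of the paths from the log, for illustration:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of `openssl x509 -checkend 86400`.
        deadline := time.Now().Add(86400 * time.Second)
        if cert.NotAfter.Before(deadline) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }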
I1216 03:36:08.090187    9380 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61010 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 03:36:08.090264    9380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 03:36:08.099931    9380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:36:08.103158    9380 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 03:36:08.103229    9380 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 03:36:08.103259    9380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:36:08.105911    9380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:36:08.106086    9380 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-873000" does not appear in /Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:36:08.106104    9380 kubeconfig.go:62] /Users/jenkins/minikube-integration/20107-6737/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-873000" cluster setting kubeconfig missing "stopped-upgrade-873000" context setting]
	I1216 03:36:08.106255    9380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/kubeconfig: {Name:mk517290cc56e622570f1566006f8aa91b83e6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 03:36:08.107717    9380 kapi.go:59] client config for stopped-upgrade-873000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/client.key", CAFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023def70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:36:08.113802    9380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:36:08.116709    9380 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-873000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
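The drift check itself is simple: diff -u the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new, and treat any difference (here, the unix:// scheme now required on the cri-dockerd socket plus the switch from the systemd to the cgroupfs cgroup driver) as grounds for a reconfigure. A sketch of the same decision in Go, relying on diff's exit status (0 means identical, 1 means the files differ):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()

        if err == nil {
            fmt.Println("config unchanged, no reconfigure needed")
            return
        }
        // diff exits 1 when the files differ, >1 on real errors.
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("detected kubeadm config drift:\n%s", out)
            return
        }
        log.Fatal(err)
    }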
	I1216 03:36:08.116717    9380 kubeadm.go:1160] stopping kube-system containers ...
	I1216 03:36:08.116769    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 03:36:08.128780    9380 docker.go:483] Stopping containers: [a4412edbd9c5 f9b71e989c7d 0fc66b9d9d48 b0550fbe622c 42371a91c62d 6e4d0e48c7b4 3fbcde1f3ea4 12512226f2ee]
	I1216 03:36:08.128864    9380 ssh_runner.go:195] Run: docker stop a4412edbd9c5 f9b71e989c7d 0fc66b9d9d48 b0550fbe622c 42371a91c62d 6e4d0e48c7b4 3fbcde1f3ea4 12512226f2ee
	I1216 03:36:08.139305    9380 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 03:36:08.144856    9380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:36:08.148056    9380 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:36:08.148061    9380 kubeadm.go:157] found existing configuration files:
	
	I1216 03:36:08.148089    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/admin.conf
	I1216 03:36:08.150665    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:36:08.150696    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:36:08.153320    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/kubelet.conf
	I1216 03:36:08.156351    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:36:08.156384    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:36:08.159072    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/controller-manager.conf
	I1216 03:36:08.161495    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:36:08.161522    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:36:08.164764    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/scheduler.conf
	I1216 03:36:08.167598    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:36:08.167628    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:36:08.170114    9380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:36:08.173222    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:08.196978    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:08.537544    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:08.673822    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:36:08.696561    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
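Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory, with the pinned v1.24.1 binaries placed first on PATH. A loose Go rendering of that sequence, with paths taken from the log and error handling condensed:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("phase %v failed: %v\n%s", p, err, out)
            }
            fmt.Println("completed phase:", p)
        }
    }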
	I1216 03:36:08.718246    9380 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:36:08.718340    9380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:09.221588    9380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:09.720424    9380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:36:09.725284    9380 api_server.go:72] duration metric: took 1.007058083s to wait for apiserver process to appear ...
	I1216 03:36:09.725300    9380 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:36:09.725543    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:14.728542    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:14.728654    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:19.729685    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:19.729709    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:24.730464    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:24.730490    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:29.731477    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:29.731508    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:34.732804    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:34.732827    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:39.734454    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:39.734511    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:44.735105    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:44.735144    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:49.737306    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:49.737336    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:54.737690    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:54.737735    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:36:59.739957    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:36:59.739999    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:04.742189    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:04.742228    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:09.743539    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
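The polling above issues a GET against /healthz every few seconds with a roughly 5 s client timeout; once the wait window passes without a 200 response, minikube falls back to the diagnostic log gathering that follows. A minimal healthz poller in Go; it skips TLS verification only because this sketch does not load minikube's client certificate and CA, which the real check uses:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative only: the real check authenticates with the
                // cluster's client certificate and verifies against its CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for apiserver; gathering logs instead")
    }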
	I1216 03:37:09.744503    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:09.764183    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:09.764282    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:09.775033    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:09.775116    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:09.794542    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:09.794620    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:09.805035    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:09.805119    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:09.815438    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:09.815511    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:09.828658    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:09.828735    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:09.839153    9380 logs.go:282] 0 containers: []
	W1216 03:37:09.839165    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:09.839235    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:09.858101    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:09.858128    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:09.858133    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:09.883852    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:09.883864    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:09.921797    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:09.921808    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:10.033423    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:10.033437    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:10.045106    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:10.045119    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:10.060865    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:10.060877    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:10.077963    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:10.077976    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:10.094949    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:10.094963    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:10.108976    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:10.108987    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:10.153451    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:10.153469    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:10.168855    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:10.168868    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:10.184844    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:10.184856    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:10.196902    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:10.196916    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:10.211293    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:10.211303    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:10.225039    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:10.225050    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:10.229538    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:10.229545    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:10.242875    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:10.242886    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
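Each failed health window triggers the same diagnostic sweep: enumerate the running and exited control-plane containers by name filter, then tail the last 400 lines of each. The sweep repeats after every subsequent failed window, which is why the block recurs below with new timestamps. A condensed Go version of one sweep, using the same docker CLI calls as the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"}

        for _, c := range components {
            // Same filter shape as the log: docker ps -a --filter=name=k8s_<component>
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                log.Fatal(err)
            }
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
            }
        }
    }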
	I1216 03:37:12.757509    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:17.759720    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:17.759985    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:17.791721    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:17.791852    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:17.806706    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:17.806795    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:17.818726    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:17.818802    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:17.829940    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:17.830020    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:17.840496    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:17.840590    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:17.851571    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:17.851647    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:17.861217    9380 logs.go:282] 0 containers: []
	W1216 03:37:17.861227    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:17.861303    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:17.871933    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:17.871953    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:17.871958    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:17.909756    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:17.909767    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:17.924171    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:17.924183    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:17.942222    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:17.942233    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:17.953694    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:17.953706    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:17.965159    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:17.965169    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:17.979384    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:17.979394    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:17.991073    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:17.991087    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:18.008310    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:18.008322    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:18.012525    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:18.012533    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:18.051746    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:18.051760    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:18.065969    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:18.065981    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:18.091956    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:18.091967    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:18.129496    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:18.129512    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:18.144114    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:18.144126    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:18.158793    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:18.158803    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:37:18.170397    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:18.170412    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:20.685308    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:25.687742    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:25.688272    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:25.718420    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:25.718568    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:25.737212    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:25.737331    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:25.751040    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:25.751131    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:25.764416    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:25.764501    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:25.774994    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:25.775074    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:25.785620    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:25.785693    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:25.796089    9380 logs.go:282] 0 containers: []
	W1216 03:37:25.796100    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:25.796165    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:25.806229    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:25.806251    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:25.806256    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:25.822181    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:25.822195    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:37:25.833937    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:25.833951    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:25.848719    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:25.848731    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:25.860587    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:25.860598    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:25.873146    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:25.873158    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:25.907978    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:25.907993    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:25.923508    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:25.923523    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:25.962194    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:25.962204    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:25.982567    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:25.982580    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:25.994511    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:25.994525    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:26.018387    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:26.018398    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:26.054827    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:26.054834    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:26.058809    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:26.058817    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:26.070106    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:26.070118    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:26.083969    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:26.083983    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:26.104775    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:26.104786    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:28.622225    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:33.624493    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:33.624631    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:33.648183    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:33.648267    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:33.658817    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:33.658898    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:33.669165    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:33.669239    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:33.681786    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:33.681870    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:33.692647    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:33.692732    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:33.703492    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:33.703576    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:33.713756    9380 logs.go:282] 0 containers: []
	W1216 03:37:33.713767    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:33.713835    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:33.724165    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:33.724184    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:33.724189    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:33.737728    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:33.737745    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:33.751972    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:33.751986    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:33.767645    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:33.767657    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:37:33.779206    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:33.779214    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:33.793950    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:33.793960    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:33.806459    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:33.806469    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:33.844139    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:33.844152    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:33.855661    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:33.855670    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:33.866662    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:33.866676    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:33.881479    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:33.881490    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:33.898369    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:33.898378    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:33.909234    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:33.909243    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:33.933304    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:33.933315    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:33.971205    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:33.971220    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:34.008113    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:34.008125    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:34.020421    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:34.020433    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:36.526940    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:41.529236    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:41.529565    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:41.554852    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:41.554978    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:41.571263    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:41.571366    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:41.584235    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:41.584321    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:41.603644    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:41.603726    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:41.614580    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:41.614664    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:41.625105    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:41.625181    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:41.635839    9380 logs.go:282] 0 containers: []
	W1216 03:37:41.635850    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:41.635918    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:41.646117    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:41.646133    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:41.646140    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:41.684069    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:41.684081    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:41.707458    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:41.707470    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:41.731441    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:41.731467    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:41.735525    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:41.735534    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:41.749309    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:41.749318    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:41.763686    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:41.763696    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:41.775663    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:41.775674    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:41.812987    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:41.812997    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:41.848639    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:41.848650    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:41.870402    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:41.870413    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:37:41.882249    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:41.882260    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:41.894257    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:41.894267    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:41.905616    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:41.905627    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:41.919719    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:41.919729    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:41.931920    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:41.931932    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:41.944390    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:41.944400    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:44.463867    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:49.466173    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:49.466446    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:49.487131    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:49.487249    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:49.502468    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:49.502549    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:49.516496    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:49.516575    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:49.527960    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:49.528048    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:49.538257    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:49.538338    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:49.548474    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:49.548557    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:49.559607    9380 logs.go:282] 0 containers: []
	W1216 03:37:49.559619    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:49.559692    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:49.569888    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:49.569908    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:49.569914    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:49.581403    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:49.581419    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:49.596728    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:49.596739    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:49.634002    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:49.634014    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:49.649403    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:49.649418    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:49.661743    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:49.661755    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:49.678776    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:49.678787    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:49.691051    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:49.691063    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:49.695297    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:49.695305    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:49.737140    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:49.737155    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:49.751405    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:49.751416    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:49.770808    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:49.770819    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:49.783894    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:49.783909    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:49.795560    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:49.795571    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:49.833015    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:49.833027    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:49.846917    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:49.846927    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:37:49.858965    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:49.858975    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:52.384148    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:37:57.386472    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:37:57.386680    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:37:57.406712    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:37:57.406818    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:37:57.420193    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:37:57.420277    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:37:57.432197    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:37:57.432273    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:37:57.442619    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:37:57.442700    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:37:57.452941    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:37:57.453022    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:37:57.463878    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:37:57.463952    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:37:57.473706    9380 logs.go:282] 0 containers: []
	W1216 03:37:57.473725    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:37:57.473795    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:37:57.484141    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:37:57.484159    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:37:57.484166    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:37:57.495815    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:37:57.495828    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:37:57.507093    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:37:57.507103    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:37:57.531198    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:37:57.531206    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:37:57.566297    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:37:57.566312    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:37:57.577661    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:37:57.577672    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:37:57.592927    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:37:57.592942    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:37:57.607596    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:37:57.607606    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:37:57.621895    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:37:57.621908    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:37:57.639143    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:37:57.639153    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:37:57.653311    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:37:57.653323    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:37:57.689977    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:37:57.689986    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:37:57.694037    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:37:57.694044    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:37:57.709636    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:37:57.709648    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:37:57.721624    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:37:57.721637    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:37:57.759925    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:37:57.759936    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:37:57.783425    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:37:57.783435    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
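
Before each gathering pass, the runner resolves component names to container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; because `-a` includes stopped containers, restarted components report two IDs apiece (apiserver, etcd, scheduler, controller-manager above). A hypothetical Go helper illustrating that enumeration — the component list is copied from the log, everything else is assumed for illustration:

```go
// Hypothetical sketch: enumerate kubeadm-managed containers by name prefix,
// the way the log's "docker ps -a --filter=name=k8s_<component>" calls do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs (running or exited) whose name
// matches the k8s_<component> prefix used by the dockershim/cri-dockerd.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the "N containers: [...]" lines printed at logs.go:282.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

The empty result for "kindnet" is expected here — this cluster uses no kindnet CNI — which is why each pass logs the `No container was found matching "kindnet"` warning rather than an error.
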
	I1216 03:38:00.295908    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:05.298198    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:05.298395    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:05.313059    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:05.313143    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:05.326730    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:05.326815    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:05.337042    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:05.337117    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:05.347524    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:05.347603    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:05.358476    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:05.358553    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:05.374074    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:05.374162    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:05.388999    9380 logs.go:282] 0 containers: []
	W1216 03:38:05.389014    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:05.389084    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:05.400213    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:05.400233    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:05.400239    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:05.415262    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:05.415270    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:05.429612    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:05.429624    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:05.449208    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:05.449220    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:05.462536    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:05.462550    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:05.474139    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:05.474149    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:05.510667    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:05.510680    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:05.551401    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:05.551412    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:05.562458    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:05.562473    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:05.574243    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:05.574252    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:05.599374    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:05.599382    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:05.603465    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:05.603471    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:05.618220    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:05.618234    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:05.630102    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:05.630113    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:05.644424    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:05.644438    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:05.680614    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:05.680625    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:05.698318    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:05.698328    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:08.211596    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:13.213819    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:13.214046    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:13.231443    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:13.231526    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:13.244793    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:13.244863    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:13.255020    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:13.255102    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:13.267112    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:13.267187    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:13.277524    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:13.277591    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:13.288131    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:13.288198    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:13.297852    9380 logs.go:282] 0 containers: []
	W1216 03:38:13.297862    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:13.297926    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:13.309847    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:13.309869    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:13.309876    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:13.321961    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:13.321973    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:13.333210    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:13.333221    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:13.358052    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:13.358059    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:13.395593    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:13.395603    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:13.410961    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:13.410976    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:13.425783    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:13.425797    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:13.436654    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:13.436667    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:13.474167    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:13.474179    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:13.488997    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:13.489008    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:13.506062    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:13.506073    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:13.520557    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:13.520567    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:13.533357    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:13.533367    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:13.547726    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:13.547742    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:13.559898    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:13.559908    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:13.598317    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:13.598328    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:13.602465    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:13.602470    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:16.118271    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:21.118850    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:21.119050    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:21.135947    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:21.136043    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:21.148362    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:21.148455    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:21.159563    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:21.159644    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:21.170633    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:21.170719    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:21.181089    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:21.181166    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:21.191363    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:21.191440    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:21.201504    9380 logs.go:282] 0 containers: []
	W1216 03:38:21.201519    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:21.201588    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:21.212923    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:21.212940    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:21.212945    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:21.251258    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:21.251272    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:21.268829    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:21.268841    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:21.280782    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:21.280792    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:21.319306    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:21.319314    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:21.354310    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:21.354326    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:21.366484    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:21.366495    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:21.377795    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:21.377809    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:21.389313    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:21.389324    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:21.393625    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:21.393631    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:21.405165    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:21.405179    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:21.416576    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:21.416585    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:21.430741    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:21.430749    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:21.445073    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:21.445083    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:21.463669    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:21.463681    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:21.478039    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:21.478051    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:21.500894    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:21.500901    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:24.015843    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:29.016022    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:29.016279    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:29.045002    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:29.045121    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:29.062070    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:29.062156    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:29.077027    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:29.077107    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:29.092089    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:29.092169    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:29.102752    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:29.102835    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:29.113639    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:29.113714    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:29.123704    9380 logs.go:282] 0 containers: []
	W1216 03:38:29.123717    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:29.123787    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:29.134093    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:29.134112    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:29.134118    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:29.149068    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:29.149083    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:29.183596    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:29.183610    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:29.199676    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:29.199691    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:29.223798    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:29.223809    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:29.260614    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:29.260625    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:29.281959    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:29.281971    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:29.319356    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:29.319368    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:29.333854    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:29.333863    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:29.348078    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:29.348091    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:29.366572    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:29.366584    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:29.380939    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:29.380952    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:29.385402    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:29.385410    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:29.397351    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:29.397365    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:29.409618    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:29.409632    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:29.420940    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:29.420951    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:29.446029    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:29.446036    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:31.961518    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:36.963829    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:36.964420    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:37.003825    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:37.003982    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:37.030695    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:37.030788    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:37.045003    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:37.045091    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:37.056882    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:37.056960    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:37.067410    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:37.067483    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:37.085806    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:37.085881    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:37.099894    9380 logs.go:282] 0 containers: []
	W1216 03:38:37.099904    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:37.099967    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:37.110543    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:37.110562    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:37.110568    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:37.117121    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:37.117131    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:37.134748    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:37.134762    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:37.147048    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:37.147057    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:37.158583    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:37.158594    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:37.174886    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:37.174915    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:37.186510    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:37.186524    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:37.211277    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:37.211288    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:37.250119    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:37.250129    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:37.288208    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:37.288220    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:37.303872    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:37.303884    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:37.315187    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:37.315201    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:37.330380    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:37.330393    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:37.341920    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:37.341931    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:37.377733    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:37.377749    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:37.391418    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:37.391431    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:37.406175    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:37.406190    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:39.925552    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:44.927886    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:44.928166    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:44.953822    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:44.953969    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:44.973148    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:44.973257    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:44.986241    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:44.986320    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:44.997180    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:44.997261    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:45.008015    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:45.008092    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:45.018413    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:45.018492    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:45.028269    9380 logs.go:282] 0 containers: []
	W1216 03:38:45.028283    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:45.028347    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:45.039785    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:45.039814    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:45.039819    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:45.057466    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:45.057476    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:45.095626    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:45.095640    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:45.109789    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:45.109800    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:45.120967    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:45.120978    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:45.156580    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:45.156593    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:45.171161    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:45.171173    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:45.182283    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:45.182300    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:45.194046    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:45.194057    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:45.209282    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:45.209294    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:45.227945    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:45.227954    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:45.239511    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:45.239522    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:45.251861    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:45.251871    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:45.288713    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:45.288724    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:45.300719    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:45.300733    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:45.315739    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:45.315750    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:45.320406    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:45.320412    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:47.844098    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:38:52.846350    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:38:52.846828    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:38:52.879510    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:38:52.879645    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:38:52.898831    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:38:52.898943    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:38:52.913482    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:38:52.913562    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:38:52.925480    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:38:52.925555    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:38:52.936376    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:38:52.936454    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:38:52.947164    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:38:52.947246    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:38:52.958504    9380 logs.go:282] 0 containers: []
	W1216 03:38:52.958515    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:38:52.958589    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:38:52.969063    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:38:52.969079    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:38:52.969084    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:38:52.985438    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:38:52.985451    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:38:52.999115    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:38:52.999126    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:38:53.011597    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:38:53.011610    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:38:53.016525    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:38:53.016533    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:38:53.032026    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:38:53.032037    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:38:53.044155    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:38:53.044165    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:38:53.059216    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:38:53.059227    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:38:53.071373    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:38:53.071382    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:38:53.106888    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:38:53.106903    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:38:53.123052    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:38:53.123067    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:38:53.138607    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:38:53.138617    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:38:53.152867    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:38:53.152878    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:38:53.164839    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:38:53.164849    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:38:53.187545    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:38:53.187553    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:38:53.225358    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:38:53.225371    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:38:53.266876    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:38:53.266887    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:38:55.786994    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:00.789259    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:00.789533    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:00.808884    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:00.808988    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:00.822624    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:00.822712    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:00.834681    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:00.834759    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:00.845038    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:00.845126    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:00.855660    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:00.855736    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:00.866020    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:00.866101    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:00.876327    9380 logs.go:282] 0 containers: []
	W1216 03:39:00.876346    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:00.876414    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:00.893603    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:00.893624    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:00.893629    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:00.908141    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:00.908152    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:00.922581    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:00.922592    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:00.935404    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:00.935416    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:00.974459    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:00.974475    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:01.009381    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:01.009393    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:01.026506    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:01.026516    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:01.050154    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:01.050162    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:01.063859    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:01.063872    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:01.075389    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:01.075399    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:01.092375    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:01.092384    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:01.134347    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:01.134358    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:01.149827    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:01.149844    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:01.161647    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:01.161661    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:01.177814    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:01.177825    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:01.182110    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:01.182118    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:01.193263    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:01.193273    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:03.712944    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:08.715314    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:08.715548    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:08.736816    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:08.736931    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:08.751381    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:08.751468    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:08.763605    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:08.763683    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:08.775075    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:08.775155    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:08.785688    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:08.785775    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:08.796135    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:08.796211    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:08.807774    9380 logs.go:282] 0 containers: []
	W1216 03:39:08.807789    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:08.807860    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:08.818524    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:08.818542    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:08.818548    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:08.855206    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:08.855214    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:08.869718    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:08.869730    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:08.884182    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:08.884192    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:08.896333    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:08.896345    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:08.910433    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:08.910446    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:08.947691    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:08.947701    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:08.960357    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:08.960367    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:08.984875    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:08.984887    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:09.023333    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:09.023344    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:09.037798    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:09.037810    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:09.049467    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:09.049478    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:09.060793    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:09.060808    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:09.064897    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:09.064905    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:09.079152    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:09.079162    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:09.097104    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:09.097116    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:09.110484    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:09.110493    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:11.627982    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:16.629941    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:16.630226    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:16.656889    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:16.657036    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:16.676182    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:16.676273    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:16.691061    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:16.691142    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:16.709433    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:16.709505    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:16.719501    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:16.719582    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:16.730256    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:16.730345    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:16.741284    9380 logs.go:282] 0 containers: []
	W1216 03:39:16.741297    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:16.741366    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:16.752225    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:16.752244    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:16.752251    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:16.766380    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:16.766393    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:16.781088    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:16.781101    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:16.796449    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:16.796460    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:16.807836    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:16.807848    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:16.812662    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:16.812669    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:16.826293    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:16.826304    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:16.841633    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:16.841644    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:16.858487    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:16.858498    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:16.895735    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:16.895746    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:16.937932    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:16.937947    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:16.949193    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:16.949203    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:16.961183    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:16.961197    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:16.972703    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:16.972715    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:16.996533    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:16.996542    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:17.008092    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:17.008102    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:17.047587    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:17.047607    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
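
Each diagnostic pass above fans out the same "docker logs --tail 400 <id>" command over every container ID enumerated earlier. A compact sketch of that fan-out follows; invoking the docker CLI locally is a stand-in assumption for minikube's SSH runner, and the function name is hypothetical.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs tails the last 400 lines from each container, mirroring the
    // repeated "docker logs --tail 400 <id>" calls in the trace. Invoking the
    // docker CLI directly stands in for minikube's SSH runner.
    func gatherLogs(containers map[string]string) {
    	for name, id := range containers {
    		fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
    		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Printf("  error: %v\n", err)
    			continue
    		}
    		fmt.Printf("%s", out)
    	}
    }

    func main() {
    	// Container IDs taken from the enumeration above.
    	gatherLogs(map[string]string{
    		"etcd":           "d891d788317b",
    		"kube-apiserver": "aa73f7f1e8b6",
    	})
    }
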
	I1216 03:39:19.561812    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:24.564091    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:24.564297    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:24.575598    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:24.575679    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:24.586411    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:24.586487    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:24.596541    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:24.596629    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:24.607614    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:24.607690    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:24.618645    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:24.618733    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:24.629724    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:24.629805    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:24.639667    9380 logs.go:282] 0 containers: []
	W1216 03:39:24.639679    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:24.639743    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:24.651107    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:24.651125    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:24.651132    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:24.655874    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:24.655882    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:24.667191    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:24.667204    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:24.692301    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:24.692310    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:24.726115    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:24.726129    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:24.737701    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:24.737715    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:24.754469    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:24.754483    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:24.769089    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:24.769100    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:24.780417    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:24.780428    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:24.820946    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:24.820961    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:24.835365    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:24.835378    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:24.855551    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:24.855565    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:24.870550    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:24.870559    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:24.908711    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:24.908719    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:24.924174    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:24.924185    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:24.935852    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:24.935862    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:24.947655    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:24.947668    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:27.460491    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:32.461545    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:32.462040    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:32.529000    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:32.529110    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:32.553569    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:32.553656    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:32.563995    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:32.564078    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:32.574802    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:32.574893    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:32.585888    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:32.585968    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:32.596321    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:32.596393    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:32.607946    9380 logs.go:282] 0 containers: []
	W1216 03:39:32.607958    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:32.608029    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:32.618646    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:32.618666    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:32.618672    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:32.631689    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:32.631704    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:32.643510    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:32.643523    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:32.655583    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:32.655594    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:32.691491    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:32.691505    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:32.708985    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:32.708996    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:32.747168    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:32.747178    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:32.751636    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:32.751644    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:32.763219    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:32.763230    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:32.778242    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:32.778253    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:32.789138    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:32.789153    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:32.803312    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:32.803325    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:32.844217    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:32.844235    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:32.858388    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:32.858401    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:32.869947    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:32.869956    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:32.894171    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:32.894180    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:32.908350    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:32.908361    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:35.424689    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:40.427022    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:40.427510    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:40.465144    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:40.465307    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:40.486038    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:40.486158    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:40.500903    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:40.500998    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:40.513136    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:40.513224    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:40.523568    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:40.523659    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:40.534634    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:40.534719    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:40.545950    9380 logs.go:282] 0 containers: []
	W1216 03:39:40.545963    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:40.546032    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:40.556894    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:40.556911    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:40.556916    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:40.593884    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:40.593899    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:40.608697    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:40.608707    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:40.624458    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:40.624468    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:40.637125    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:40.637135    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:40.654029    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:40.654039    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:40.665318    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:40.665328    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:40.687804    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:40.687812    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:40.727124    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:40.727135    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:40.731626    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:40.731659    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:40.770748    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:40.770758    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:40.782279    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:40.782292    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:40.793592    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:40.793606    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:40.805111    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:40.805121    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:40.816712    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:40.816723    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:40.832116    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:40.832126    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:40.846928    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:40.846937    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:43.363884    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:48.366138    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:48.366409    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:48.387418    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:48.387538    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:48.402016    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:48.402104    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:48.414647    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:48.414728    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:48.425316    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:48.425405    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:48.435655    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:48.435738    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:48.445729    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:48.445806    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:48.456212    9380 logs.go:282] 0 containers: []
	W1216 03:39:48.456224    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:48.456292    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:48.467148    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:48.467166    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:48.467171    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:48.480751    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:48.480763    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:48.496612    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:48.496623    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:48.514257    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:48.514270    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:48.552305    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:48.552317    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:48.564194    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:48.564206    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:48.579667    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:48.579680    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:48.598019    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:48.598031    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:48.615446    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:48.615457    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:48.638902    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:48.638912    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:48.677296    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:48.677308    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:48.681595    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:48.681603    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:48.718490    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:48.718501    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:48.733823    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:48.733834    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:48.745423    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:48.745433    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:48.756854    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:48.756863    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:48.768226    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:48.768238    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:51.282352    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:39:56.284715    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:39:56.285249    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:39:56.327281    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:39:56.327429    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:39:56.346223    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:39:56.346336    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:39:56.361416    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:39:56.361499    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:39:56.373544    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:39:56.373621    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:39:56.384184    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:39:56.384263    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:39:56.394866    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:39:56.394947    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:39:56.405796    9380 logs.go:282] 0 containers: []
	W1216 03:39:56.405807    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:39:56.405870    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:39:56.416330    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:39:56.416351    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:39:56.416357    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:39:56.427702    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:39:56.427714    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:39:56.431843    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:39:56.431853    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:39:56.445783    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:39:56.445793    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:39:56.460730    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:39:56.460745    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:39:56.476918    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:39:56.476933    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:39:56.490707    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:39:56.490723    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:39:56.527340    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:39:56.527348    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:39:56.566148    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:39:56.566160    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:39:56.578165    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:39:56.578177    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:39:56.598273    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:39:56.598285    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:39:56.611993    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:39:56.612005    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:39:56.626241    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:39:56.626251    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:39:56.637936    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:39:56.637950    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:39:56.666890    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:39:56.666902    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:39:56.718199    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:39:56.718213    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:39:56.737261    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:39:56.737277    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:39:59.261339    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:04.263796    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:04.264209    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:40:04.300007    9380 logs.go:282] 2 containers: [aa73f7f1e8b6 a4412edbd9c5]
	I1216 03:40:04.300148    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:40:04.318349    9380 logs.go:282] 2 containers: [d891d788317b f9b71e989c7d]
	I1216 03:40:04.318453    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:40:04.333576    9380 logs.go:282] 1 containers: [7018079ce1a9]
	I1216 03:40:04.333668    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:40:04.345866    9380 logs.go:282] 2 containers: [1fedd85b1777 42371a91c62d]
	I1216 03:40:04.345945    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:40:04.364364    9380 logs.go:282] 1 containers: [ed6ae570d946]
	I1216 03:40:04.364447    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:40:04.376165    9380 logs.go:282] 2 containers: [b1b29322b5bb 0fc66b9d9d48]
	I1216 03:40:04.376242    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:40:04.386223    9380 logs.go:282] 0 containers: []
	W1216 03:40:04.386238    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:40:04.386305    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:40:04.397523    9380 logs.go:282] 2 containers: [5853a92189ef 5032e972bb3a]
	I1216 03:40:04.397543    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:40:04.397549    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:40:04.402373    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:40:04.402381    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:40:04.436952    9380 logs.go:123] Gathering logs for kube-apiserver [aa73f7f1e8b6] ...
	I1216 03:40:04.436966    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa73f7f1e8b6"
	I1216 03:40:04.451383    9380 logs.go:123] Gathering logs for etcd [d891d788317b] ...
	I1216 03:40:04.451395    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d891d788317b"
	I1216 03:40:04.470573    9380 logs.go:123] Gathering logs for kube-controller-manager [b1b29322b5bb] ...
	I1216 03:40:04.470586    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b29322b5bb"
	I1216 03:40:04.488438    9380 logs.go:123] Gathering logs for storage-provisioner [5853a92189ef] ...
	I1216 03:40:04.488448    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5853a92189ef"
	I1216 03:40:04.499819    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:40:04.499833    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:40:04.511647    9380 logs.go:123] Gathering logs for kube-apiserver [a4412edbd9c5] ...
	I1216 03:40:04.511659    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4412edbd9c5"
	I1216 03:40:04.549466    9380 logs.go:123] Gathering logs for coredns [7018079ce1a9] ...
	I1216 03:40:04.549483    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7018079ce1a9"
	I1216 03:40:04.561041    9380 logs.go:123] Gathering logs for kube-scheduler [1fedd85b1777] ...
	I1216 03:40:04.561056    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fedd85b1777"
	I1216 03:40:04.573047    9380 logs.go:123] Gathering logs for kube-proxy [ed6ae570d946] ...
	I1216 03:40:04.573057    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed6ae570d946"
	I1216 03:40:04.584660    9380 logs.go:123] Gathering logs for kube-controller-manager [0fc66b9d9d48] ...
	I1216 03:40:04.584670    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fc66b9d9d48"
	I1216 03:40:04.599396    9380 logs.go:123] Gathering logs for storage-provisioner [5032e972bb3a] ...
	I1216 03:40:04.599406    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5032e972bb3a"
	I1216 03:40:04.611257    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:40:04.611270    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:40:04.650356    9380 logs.go:123] Gathering logs for etcd [f9b71e989c7d] ...
	I1216 03:40:04.650365    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b71e989c7d"
	I1216 03:40:04.666401    9380 logs.go:123] Gathering logs for kube-scheduler [42371a91c62d] ...
	I1216 03:40:04.666411    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42371a91c62d"
	I1216 03:40:04.683150    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:40:04.683161    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:40:07.208547    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:12.210788    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:12.210851    9380 kubeadm.go:597] duration metric: took 4m4.112160459s to restartPrimaryControlPlane
	W1216 03:40:12.210904    9380 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:40:12.210928    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 03:40:13.247332    9380 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036412417s)
	I1216 03:40:13.247421    9380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:40:13.252408    9380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:40:13.255163    9380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:40:13.258019    9380 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:40:13.258025    9380 kubeadm.go:157] found existing configuration files:
	
	I1216 03:40:13.258058    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/admin.conf
	I1216 03:40:13.260995    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:40:13.261026    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:40:13.263740    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/kubelet.conf
	I1216 03:40:13.266374    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:40:13.266410    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:40:13.269774    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/controller-manager.conf
	I1216 03:40:13.272649    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:40:13.272672    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:40:13.275412    9380 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/scheduler.conf
	I1216 03:40:13.278383    9380 kubeadm.go:163] "https://control-plane.minikube.internal:61010" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61010 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:40:13.278427    9380 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
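
The four grep/rm pairs above implement a simple sweep: any kubeconfig that does not mention the expected control-plane endpoint is treated as stale and removed. A sketch of that pattern follows, with the endpoint and file list taken from the trace; reading files locally stands in for the SSH-based grep, and the helper name is hypothetical.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, mirroring the grep/rm pairs above.
    // Reading files locally stands in for the SSH-based grep.
    func cleanStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
    			os.Remove(p) // tolerate the file already being absent
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:61010", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
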
	I1216 03:40:13.281842    9380 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 03:40:13.300872    9380 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 03:40:13.300916    9380 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 03:40:13.352041    9380 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:40:13.352124    9380 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:40:13.352181    9380 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 03:40:13.405728    9380 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:40:13.409643    9380 out.go:235]   - Generating certificates and keys ...
	I1216 03:40:13.409678    9380 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 03:40:13.409712    9380 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 03:40:13.409760    9380 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:40:13.409799    9380 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:40:13.409844    9380 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:40:13.409886    9380 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 03:40:13.409942    9380 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:40:13.409975    9380 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:40:13.410026    9380 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:40:13.410067    9380 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:40:13.410091    9380 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 03:40:13.410124    9380 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:40:13.496330    9380 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:40:13.647829    9380 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:40:13.681825    9380 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:40:13.758377    9380 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:40:13.787347    9380 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:40:13.787870    9380 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:40:13.787896    9380 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 03:40:13.875842    9380 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:40:13.879560    9380 out.go:235]   - Booting up control plane ...
	I1216 03:40:13.879612    9380 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:40:13.879650    9380 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:40:13.879685    9380 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:40:13.879727    9380 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:40:13.883963    9380 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 03:40:18.388696    9380 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504565 seconds
	I1216 03:40:18.388789    9380 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:40:18.394084    9380 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:40:18.906563    9380 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:40:18.906763    9380 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-873000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:40:19.410802    9380 kubeadm.go:310] [bootstrap-token] Using token: anca94.c9exhlhjria45dqv
	I1216 03:40:19.417341    9380 out.go:235]   - Configuring RBAC rules ...
	I1216 03:40:19.417414    9380 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:40:19.417464    9380 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:40:19.424300    9380 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:40:19.425313    9380 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:40:19.426143    9380 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:40:19.426982    9380 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:40:19.430062    9380 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:40:19.613717    9380 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 03:40:19.816577    9380 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 03:40:19.816962    9380 kubeadm.go:310] 
	I1216 03:40:19.817015    9380 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 03:40:19.817020    9380 kubeadm.go:310] 
	I1216 03:40:19.817060    9380 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 03:40:19.817063    9380 kubeadm.go:310] 
	I1216 03:40:19.817080    9380 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 03:40:19.817124    9380 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:40:19.817150    9380 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:40:19.817154    9380 kubeadm.go:310] 
	I1216 03:40:19.817182    9380 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 03:40:19.817187    9380 kubeadm.go:310] 
	I1216 03:40:19.817214    9380 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:40:19.817217    9380 kubeadm.go:310] 
	I1216 03:40:19.817246    9380 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 03:40:19.817289    9380 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:40:19.817336    9380 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:40:19.817339    9380 kubeadm.go:310] 
	I1216 03:40:19.817384    9380 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:40:19.817434    9380 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 03:40:19.817440    9380 kubeadm.go:310] 
	I1216 03:40:19.817493    9380 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anca94.c9exhlhjria45dqv \
	I1216 03:40:19.817554    9380 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e91f5fc61f2a05b89f8c1b39ba5f2828ed76713601e7dc43cc58f3c0bc6e1119 \
	I1216 03:40:19.817567    9380 kubeadm.go:310] 	--control-plane 
	I1216 03:40:19.817571    9380 kubeadm.go:310] 
	I1216 03:40:19.817635    9380 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:40:19.817645    9380 kubeadm.go:310] 
	I1216 03:40:19.817684    9380 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anca94.c9exhlhjria45dqv \
	I1216 03:40:19.817756    9380 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e91f5fc61f2a05b89f8c1b39ba5f2828ed76713601e7dc43cc58f3c0bc6e1119 
	I1216 03:40:19.817820    9380 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:40:19.817851    9380 cni.go:84] Creating CNI manager for ""
	I1216 03:40:19.817860    9380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:40:19.822636    9380 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:40:19.825777    9380 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:40:19.828857    9380 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
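
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. The sketch below prints a conflist of the general shape such a file takes; every field value is an illustrative assumption, not the exact bytes minikube writes.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Prints a bridge CNI conflist of the general shape minikube drops into
    // /etc/cni/net.d. All field values are illustrative, not the exact
    // 496 bytes written above.
    func main() {
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }
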
	I1216 03:40:19.833751    9380 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:40:19.833807    9380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:40:19.834190    9380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-873000 minikube.k8s.io/updated_at=2024_12_16T03_40_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=stopped-upgrade-873000 minikube.k8s.io/primary=true
	I1216 03:40:19.877524    9380 ops.go:34] apiserver oom_adj: -16
	I1216 03:40:19.877655    9380 kubeadm.go:1113] duration metric: took 43.894125ms to wait for elevateKubeSystemPrivileges
	I1216 03:40:19.878197    9380 kubeadm.go:394] duration metric: took 4m11.792702833s to StartCluster
	I1216 03:40:19.878211    9380 settings.go:142] acquiring lock: {Name:mk408f6daa5d140b3b9f5d3d2f79a1d62bbf39fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:40:19.878398    9380 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:40:19.878800    9380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/kubeconfig: {Name:mk517290cc56e622570f1566006f8aa91b83e6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:40:19.879148    9380 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:40:19.879227    9380 config.go:182] Loaded profile config "stopped-upgrade-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 03:40:19.879157    9380 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:40:19.879321    9380 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-873000"
	I1216 03:40:19.879327    9380 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-873000"
	I1216 03:40:19.879330    9380 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-873000"
	W1216 03:40:19.879333    9380 addons.go:243] addon storage-provisioner should already be in state true
	I1216 03:40:19.879347    9380 host.go:66] Checking if "stopped-upgrade-873000" exists ...
	I1216 03:40:19.879564    9380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-873000"
	I1216 03:40:19.880536    9380 kapi.go:59] client config for stopped-upgrade-873000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/stopped-upgrade-873000/client.key", CAFile:"/Users/jenkins/minikube-integration/20107-6737/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023def70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:40:19.880858    9380 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-873000"
	W1216 03:40:19.880862    9380 addons.go:243] addon default-storageclass should already be in state true
	I1216 03:40:19.880876    9380 host.go:66] Checking if "stopped-upgrade-873000" exists ...
	I1216 03:40:19.882646    9380 out.go:177] * Verifying Kubernetes components...
	I1216 03:40:19.882982    9380 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:40:19.886656    9380 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:40:19.886663    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	I1216 03:40:19.892626    9380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:40:19.896782    9380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:40:19.900658    9380 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:40:19.900666    9380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:40:19.900673    9380 sshutil.go:53] new ssh client: &{IP:localhost Port:60975 SSHKeyPath:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/stopped-upgrade-873000/id_rsa Username:docker}
	I1216 03:40:19.988319    9380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:40:19.993777    9380 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:40:19.993844    9380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:40:19.998286    9380 api_server.go:72] duration metric: took 119.129375ms to wait for apiserver process to appear ...
	I1216 03:40:19.998295    9380 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:40:19.998302    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:20.004623    9380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:40:20.051420    9380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:40:20.354386    9380 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 03:40:20.354397    9380 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 03:40:25.000281    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:25.000306    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:30.000398    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:30.000424    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:35.000629    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:35.000696    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:40.001108    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:40.001162    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:45.001681    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:45.001734    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:40:50.002393    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:50.002434    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 03:40:50.356630    9380 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 03:40:50.360698    9380 out.go:177] * Enabled addons: storage-provisioner
	I1216 03:40:50.372483    9380 addons.go:510] duration metric: took 30.493890459s for enable addons: enabled=[storage-provisioner]
	I1216 03:40:55.003291    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:40:55.003328    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:00.004540    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:00.004578    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:05.006017    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:05.006046    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:10.007795    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:10.007823    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:15.009932    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:15.009972    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:20.012170    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:20.012353    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:20.023337    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:20.023424    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:20.033829    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:20.033911    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:20.044195    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:20.044281    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:20.054199    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:20.054277    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:20.069551    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:20.069634    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:20.084123    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:20.084195    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:20.095573    9380 logs.go:282] 0 containers: []
	W1216 03:41:20.095586    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:20.095654    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:20.106306    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:20.106322    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:20.106327    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:20.124187    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:20.124197    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:20.137508    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:20.137519    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:20.163398    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:20.163408    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:20.199246    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:20.199257    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:20.203811    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:20.203821    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:20.218089    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:20.218099    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:20.232472    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:20.232482    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:20.247660    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:20.247670    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:20.287315    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:20.287329    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:41:20.299113    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:20.299128    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:20.310864    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:20.310873    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:20.323009    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:20.323022    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:22.837255    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:27.839505    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:27.839683    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:27.850864    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:27.850954    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:27.861528    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:27.861615    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:27.871857    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:27.871931    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:27.884649    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:27.884732    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:27.895236    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:27.895315    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:27.905498    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:27.905581    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:27.920816    9380 logs.go:282] 0 containers: []
	W1216 03:41:27.920828    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:27.920895    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:27.935480    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:27.935497    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:27.935502    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:27.953022    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:27.953033    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:27.965169    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:27.965179    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:27.976923    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:27.976934    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:28.012271    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:28.012277    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:28.045572    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:28.045583    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:28.060191    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:28.060201    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:28.072083    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:28.072095    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:28.087901    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:28.087912    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:28.112961    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:28.112973    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:28.118496    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:28.118508    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:28.132426    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:28.132440    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:41:28.143846    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:28.143857    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:30.664715    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:35.667044    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:35.667282    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:35.685813    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:35.685904    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:35.697915    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:35.697991    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:35.708967    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:35.709038    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:35.719551    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:35.719628    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:35.729676    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:35.729759    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:35.740616    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:35.740695    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:35.756750    9380 logs.go:282] 0 containers: []
	W1216 03:41:35.756762    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:35.756829    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:35.767010    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:35.767027    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:35.767032    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:35.802787    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:35.802800    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:35.817023    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:35.817033    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:35.830354    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:35.830365    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:35.841836    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:35.841846    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:35.859365    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:35.859376    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:35.870818    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:35.870833    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:35.882447    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:35.882458    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:35.915553    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:35.915563    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:35.920524    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:35.920535    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:35.936050    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:35.936065    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:41:35.948073    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:35.948088    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:35.963237    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:35.963247    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:38.488229    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:43.490357    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:43.490660    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:43.512269    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:43.512375    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:43.527211    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:43.527293    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:43.539687    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:43.539767    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:43.550911    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:43.550988    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:43.561347    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:43.561417    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:43.571968    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:43.572042    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:43.582044    9380 logs.go:282] 0 containers: []
	W1216 03:41:43.582057    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:43.582123    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:43.593108    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:43.593122    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:43.593128    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:43.626531    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:43.626544    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:43.640129    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:43.640139    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:41:43.651391    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:43.651406    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:43.663404    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:43.663415    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:43.681888    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:43.681901    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:43.692921    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:43.692932    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:43.706993    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:43.707004    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:43.711682    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:43.711689    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:43.745594    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:43.745606    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:43.761539    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:43.761551    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:43.772973    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:43.772986    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:43.787485    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:43.787495    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:46.315067    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:51.317212    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:51.317355    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:51.331372    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:51.331463    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:51.347129    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:51.347204    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:51.357577    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:51.357656    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:51.367938    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:51.368009    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:51.378421    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:51.378497    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:51.389000    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:51.389072    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:51.399732    9380 logs.go:282] 0 containers: []
	W1216 03:41:51.399742    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:51.399802    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:51.410520    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:51.410537    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:51.410543    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:51.444984    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:51.444995    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:51.449722    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:51.449730    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:51.484210    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:51.484222    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:51.498464    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:51.498478    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:51.512296    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:51.512309    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:51.524116    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:51.524125    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:51.548338    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:51.548347    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:41:51.560776    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:51.560787    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:51.575373    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:51.575385    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:51.586846    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:51.586859    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:51.608957    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:51.608967    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:51.621545    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:51.621556    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:54.135798    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:41:59.137966    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:41:59.138265    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:41:59.161111    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:41:59.161251    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:41:59.179230    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:41:59.179321    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:41:59.192267    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:41:59.192354    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:41:59.211292    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:41:59.211374    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:41:59.221585    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:41:59.221666    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:41:59.232450    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:41:59.232532    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:41:59.245418    9380 logs.go:282] 0 containers: []
	W1216 03:41:59.245430    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:41:59.245496    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:41:59.255513    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:41:59.255531    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:41:59.255537    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:41:59.267098    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:41:59.267108    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:41:59.278360    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:41:59.278373    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:41:59.290048    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:41:59.290058    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:41:59.294228    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:41:59.294235    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:41:59.309943    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:41:59.309953    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:41:59.324160    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:41:59.324170    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:41:59.338606    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:41:59.338618    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:41:59.350189    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:41:59.350201    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:41:59.373419    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:41:59.373430    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:41:59.396810    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:41:59.396818    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:41:59.429426    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:41:59.429436    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:41:59.464990    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:41:59.465003    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:01.978604    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:06.980942    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:06.981468    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:07.017667    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:07.017820    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:07.037082    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:07.037177    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:07.052485    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:42:07.052575    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:07.064934    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:07.065016    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:07.075481    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:07.075563    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:07.085968    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:07.086045    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:07.096544    9380 logs.go:282] 0 containers: []
	W1216 03:42:07.096557    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:07.096621    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:07.107112    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:07.107129    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:07.107135    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:07.121898    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:07.121911    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:07.142998    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:07.143009    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:07.155164    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:07.155178    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:07.178824    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:07.178837    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:07.191087    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:07.191099    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:07.224025    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:07.224037    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:07.259713    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:07.259723    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:07.275592    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:07.275603    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:07.287368    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:07.287380    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:07.299315    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:07.299325    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:07.311476    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:07.311487    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:07.316529    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:07.316538    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:09.833589    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:14.835753    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:14.835947    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:14.858458    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:14.858535    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:14.869410    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:14.869490    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:14.879695    9380 logs.go:282] 2 containers: [ce6b803dfca6 bdc1c8409249]
	I1216 03:42:14.879769    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:14.890849    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:14.890933    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:14.901257    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:14.901338    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:14.911362    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:14.911432    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:14.921450    9380 logs.go:282] 0 containers: []
	W1216 03:42:14.921467    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:14.921524    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:14.931833    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:14.931850    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:14.931856    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:14.945741    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:14.945751    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:14.963334    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:14.963346    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:14.974950    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:14.974965    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:14.986707    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:14.986718    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:14.999147    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:14.999160    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:15.010406    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:15.010419    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:15.045347    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:15.045355    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:15.049858    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:15.049865    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:15.072668    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:15.072675    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:15.084816    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:15.084833    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:15.102508    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:15.102519    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:15.138462    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:15.138476    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:17.654965    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:22.657128    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:22.657365    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:22.684357    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:22.684441    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:22.699634    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:22.699717    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:22.709937    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:42:22.710020    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:22.723885    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:22.723964    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:22.734382    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:22.734461    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:22.745092    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:22.745170    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:22.756686    9380 logs.go:282] 0 containers: []
	W1216 03:42:22.756699    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:22.756770    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:22.767405    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:22.767422    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:22.767428    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:22.800885    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:22.800901    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:22.837906    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:42:22.837920    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:42:22.849074    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:22.849089    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:22.864647    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:22.864659    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:22.879061    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:22.879073    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:22.891962    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:22.891973    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:22.917994    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:22.918006    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:22.922195    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:22.922202    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:22.936398    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:22.936409    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:22.957454    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:22.957465    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:22.969983    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:22.969995    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:22.984743    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:42:22.984753    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:42:23.000572    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:23.000587    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:23.012986    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:23.012999    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:25.526966    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:30.529148    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:30.529318    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:30.544185    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:30.544282    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:30.555877    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:30.555963    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:30.567053    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:42:30.567136    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:30.577262    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:30.577336    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:30.588279    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:30.588362    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:30.599185    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:30.599265    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:30.609156    9380 logs.go:282] 0 containers: []
	W1216 03:42:30.609167    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:30.609226    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:30.620095    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:30.620119    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:30.620125    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:30.625209    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:30.625219    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:30.660437    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:30.660445    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:30.671939    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:30.671953    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:30.686588    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:42:30.686600    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:42:30.702719    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:42:30.702735    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:42:30.713844    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:30.713860    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:30.725239    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:30.725252    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:30.742714    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:30.742727    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:30.754661    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:30.754676    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:30.769519    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:30.769529    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:30.783494    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:30.783508    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:30.795081    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:30.795091    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:30.806590    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:30.806604    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:30.830800    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:30.830811    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:33.367890    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:38.370042    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:38.370224    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:38.381314    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:38.381393    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:38.393937    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:38.394019    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:38.405418    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:42:38.405498    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:38.416121    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:38.416195    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:38.431004    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:38.431080    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:38.441443    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:38.441515    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:38.451303    9380 logs.go:282] 0 containers: []
	W1216 03:42:38.451313    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:38.451381    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:38.461864    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:38.461882    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:38.461889    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:38.473273    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:38.473287    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:38.484864    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:38.484875    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:38.489654    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:42:38.489661    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:42:38.501323    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:38.501335    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:38.516129    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:38.516140    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:38.533018    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:38.533032    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:38.567260    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:38.567274    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:38.585377    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:38.585388    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:38.596753    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:38.596764    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:38.622482    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:38.622491    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:38.657009    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:42:38.657018    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:42:38.668828    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:38.668841    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:38.681286    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:38.681297    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:38.695331    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:38.695341    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:41.209513    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:46.211303    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:46.211592    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:46.229303    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:46.229409    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:46.242825    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:46.242908    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:46.254661    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:42:46.254744    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:46.265660    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:46.265745    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:46.276332    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:46.276413    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:46.286987    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:46.287067    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:46.301445    9380 logs.go:282] 0 containers: []
	W1216 03:42:46.301458    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:46.301535    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:46.312453    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:46.312469    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:46.312475    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:46.327155    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:46.327166    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:46.351994    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:46.352005    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:46.385832    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:46.385848    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:46.399557    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:46.399567    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:46.411370    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:42:46.411381    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:42:46.422738    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:46.422748    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:46.438938    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:46.438948    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:46.450911    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:46.450922    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:46.471406    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:46.471417    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:46.476146    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:46.476155    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:46.511701    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:42:46.511712    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:42:46.523724    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:46.523737    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:46.538681    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:46.538691    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:46.556251    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:46.556263    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:49.070158    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:42:54.072424    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:42:54.072631    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:42:54.089327    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:42:54.089426    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:42:54.102286    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:42:54.102368    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:42:54.113310    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:42:54.113399    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:42:54.123707    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:42:54.123788    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:42:54.133960    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:42:54.134048    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:42:54.144806    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:42:54.144892    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:42:54.154916    9380 logs.go:282] 0 containers: []
	W1216 03:42:54.154931    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:54.154996    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:42:54.166742    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:42:54.166763    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:42:54.166769    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:42:54.180794    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:42:54.180809    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:42:54.201598    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:54.201611    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:42:54.236335    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:42:54.236347    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:42:54.251666    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:42:54.251683    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:42:54.263326    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:42:54.263341    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:42:54.274520    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:42:54.274531    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:42:54.285882    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:42:54.285897    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:42:54.297418    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:42:54.297428    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:42:54.321583    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:54.321592    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:54.325902    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:42:54.325910    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:42:54.337976    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:42:54.337989    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:42:54.352716    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:42:54.352728    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:42:54.363953    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:42:54.363963    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:54.375776    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:54.375791    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:56.913598    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:01.915940    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:01.916248    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:01.939939    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:01.940061    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:01.955960    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:01.956061    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:01.970057    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:01.970149    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:01.982291    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:01.982369    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:01.992742    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:01.992830    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:02.003479    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:02.003558    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:02.013246    9380 logs.go:282] 0 containers: []
	W1216 03:43:02.013259    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:02.013325    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:02.028882    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:02.028901    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:02.028907    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:02.047236    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:02.047248    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:02.082528    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:02.082540    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:02.086903    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:02.086913    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:02.098716    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:02.098728    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:02.110189    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:02.110200    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:02.122316    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:02.122327    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:02.134659    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:02.134670    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:02.176231    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:02.176243    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:02.189985    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:02.189997    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:02.202010    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:02.202021    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:02.214114    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:02.214125    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:02.229054    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:02.229065    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:02.244542    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:02.244551    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:02.256838    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:02.256849    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:04.784216    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:09.786483    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:09.786598    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:09.799536    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:09.799626    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:09.810734    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:09.810815    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:09.822021    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:09.822100    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:09.832710    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:09.832788    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:09.842889    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:09.842963    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:09.853308    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:09.853388    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:09.863436    9380 logs.go:282] 0 containers: []
	W1216 03:43:09.863446    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:09.863510    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:09.873891    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:09.873907    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:09.873912    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:09.907741    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:09.907758    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:09.942463    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:09.942477    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:09.954585    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:09.954597    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:09.965738    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:09.965749    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:09.980852    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:09.980861    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:09.995116    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:09.995129    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:10.014747    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:10.014757    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:10.026595    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:10.026603    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:10.039045    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:10.039055    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:10.044316    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:10.044328    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:10.070765    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:10.070790    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:10.084196    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:10.084206    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:10.110162    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:10.110177    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:10.125636    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:10.125647    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:12.646861    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:17.648977    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:17.649248    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:17.670272    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:17.670376    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:17.684110    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:17.684194    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:17.695253    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:17.695334    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:17.710297    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:17.710382    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:17.720592    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:17.720675    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:17.731620    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:17.731697    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:17.741731    9380 logs.go:282] 0 containers: []
	W1216 03:43:17.741742    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:17.741813    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:17.753097    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:17.753117    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:17.753123    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:17.767487    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:17.767500    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:17.780243    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:17.780254    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:17.793856    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:17.793870    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:17.805250    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:17.805264    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:17.830703    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:17.830716    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:17.843241    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:17.843256    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:17.880356    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:17.880367    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:17.916556    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:17.916567    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:17.934478    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:17.934489    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:17.951027    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:17.951039    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:17.963281    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:17.963293    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:17.975166    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:17.975177    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:17.993858    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:17.993870    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:17.998134    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:17.998142    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:20.515426    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:25.516464    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:25.516560    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:25.530071    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:25.530157    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:25.541832    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:25.541911    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:25.555268    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:25.555355    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:25.574774    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:25.574859    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:25.586112    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:25.586192    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:25.597797    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:25.597880    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:25.609040    9380 logs.go:282] 0 containers: []
	W1216 03:43:25.609055    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:25.609129    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:25.620543    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:25.620561    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:25.620567    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:25.640088    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:25.640102    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:25.678799    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:25.678816    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:25.691620    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:25.691630    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:25.705176    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:25.705186    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:25.718139    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:25.718148    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:25.733083    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:25.733094    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:25.745479    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:25.745488    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:25.762545    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:25.762555    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:25.795799    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:25.795812    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:25.800225    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:25.800233    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:25.815099    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:25.815110    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:25.828459    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:25.828470    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:25.846795    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:25.846805    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:25.859444    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:25.859454    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:28.386597    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:33.388692    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:33.388828    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:33.405551    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:33.405642    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:33.416634    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:33.416707    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:33.427860    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:33.427938    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:33.438214    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:33.438294    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:33.448575    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:33.448643    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:33.459215    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:33.459282    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:33.469462    9380 logs.go:282] 0 containers: []
	W1216 03:43:33.469477    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:33.469548    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:33.489620    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:33.489638    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:33.489644    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:33.501772    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:33.501782    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:33.506112    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:33.506119    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:33.542584    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:33.542594    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:33.556427    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:33.556437    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:33.567981    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:33.567993    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:33.579349    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:33.579360    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:33.593400    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:33.593411    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:33.608166    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:33.608177    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:33.620153    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:33.620163    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:33.645506    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:33.645515    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:33.679025    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:33.679046    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:33.692008    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:33.692019    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:33.710330    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:33.710343    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:33.724166    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:33.724177    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:36.238565    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:41.240661    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:41.240753    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:41.251995    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:41.252075    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:41.264268    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:41.264351    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:41.275646    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:41.275732    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:41.286926    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:41.287013    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:41.297484    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:41.297552    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:41.308934    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:41.309011    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:41.318938    9380 logs.go:282] 0 containers: []
	W1216 03:43:41.318951    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:41.319018    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:41.329472    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:41.329495    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:41.329501    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:41.341492    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:41.341508    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:41.356335    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:41.356345    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:41.368320    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:41.368332    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:41.386480    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:41.386491    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:41.411426    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:41.411435    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:41.423464    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:41.423475    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:41.458618    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:41.458628    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:41.472726    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:41.472736    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:41.486600    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:41.486611    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:41.498491    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:41.498502    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:41.510759    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:41.510774    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:41.546329    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:41.546339    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:41.551148    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:41.551155    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:41.562980    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:41.562991    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:44.076493    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:49.078779    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:49.079007    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:49.093774    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:49.093872    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:49.105850    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:49.105929    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:49.116216    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:49.116290    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:49.130225    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:49.130311    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:49.141739    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:49.141810    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:49.151943    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:49.152018    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:49.168672    9380 logs.go:282] 0 containers: []
	W1216 03:43:49.168689    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:49.168753    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:49.179821    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:49.179837    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:49.179843    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:49.191277    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:49.191288    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:49.208279    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:49.208289    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:49.232186    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:49.232196    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:49.246265    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:49.246275    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:49.258029    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:49.258043    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:49.269479    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:49.269489    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:49.283772    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:49.283782    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:49.319684    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:49.319698    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:49.324367    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:49.324374    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:49.339415    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:49.339428    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:49.351652    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:49.351668    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:49.369760    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:49.369770    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:49.381014    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:49.381028    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:49.392630    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:49.392641    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:51.927490    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:43:56.929599    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:43:56.929813    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:43:56.942500    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:43:56.942581    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:43:56.953074    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:43:56.953156    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:43:56.964364    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:43:56.964448    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:43:56.974580    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:43:56.974650    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:43:56.984980    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:43:56.985067    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:43:56.995256    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:43:56.995329    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:43:57.005550    9380 logs.go:282] 0 containers: []
	W1216 03:43:57.005562    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:57.005629    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:43:57.016017    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:43:57.016036    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:43:57.016043    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:43:57.030829    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:43:57.030840    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:43:57.042828    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:43:57.042839    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:43:57.053936    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:43:57.053947    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:57.065960    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:43:57.065972    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:43:57.080048    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:43:57.080058    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:43:57.091969    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:43:57.091981    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:43:57.103877    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:43:57.103887    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:43:57.122399    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:57.122408    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:43:57.157763    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:43:57.157775    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:43:57.174520    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:57.174533    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:57.209371    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:57.209380    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:57.213492    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:43:57.213500    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:43:57.234365    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:43:57.234378    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:43:57.255962    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:43:57.255974    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:43:59.782344    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:04.784523    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:04.784732    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:44:04.802212    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:44:04.802306    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:44:04.816917    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:44:04.817007    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:44:04.828576    9380 logs.go:282] 4 containers: [787b6f58b230 a1d385fb248d ce6b803dfca6 bdc1c8409249]
	I1216 03:44:04.828661    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:44:04.839548    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:44:04.839626    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:44:04.850352    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:44:04.850431    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:44:04.861248    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:44:04.861327    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:44:04.871677    9380 logs.go:282] 0 containers: []
	W1216 03:44:04.871688    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:04.871746    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:44:04.882604    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:44:04.882618    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:04.882624    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:44:04.917878    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:44:04.917890    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:44:04.929490    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:44:04.929502    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:44:04.941123    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:04.941137    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:04.977977    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:04.977999    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:04.983542    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:44:04.983553    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:44:04.997950    9380 logs.go:123] Gathering logs for coredns [ce6b803dfca6] ...
	I1216 03:44:04.997962    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6b803dfca6"
	I1216 03:44:05.009982    9380 logs.go:123] Gathering logs for coredns [bdc1c8409249] ...
	I1216 03:44:05.009996    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdc1c8409249"
	I1216 03:44:05.021216    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:44:05.021226    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:44:05.036115    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:44:05.036131    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:44:05.054017    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:44:05.054027    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:44:05.068228    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:44:05.068239    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:44:05.080158    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:44:05.080169    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:44:05.091821    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:44:05.091831    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:44:05.116793    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:44:05.116802    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:07.630232    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:12.632561    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:12.632765    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 03:44:12.652770    9380 logs.go:282] 1 containers: [d3f425b205d7]
	I1216 03:44:12.652870    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 03:44:12.673071    9380 logs.go:282] 1 containers: [6b9c63ab5700]
	I1216 03:44:12.673155    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 03:44:12.684405    9380 logs.go:282] 4 containers: [9ab6cc04ba76 d967b3bd4403 787b6f58b230 a1d385fb248d]
	I1216 03:44:12.684484    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 03:44:12.695118    9380 logs.go:282] 1 containers: [a3bb92f2b65b]
	I1216 03:44:12.695194    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 03:44:12.706169    9380 logs.go:282] 1 containers: [bdb1d2530097]
	I1216 03:44:12.706245    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 03:44:12.716986    9380 logs.go:282] 1 containers: [55d908ffe7f6]
	I1216 03:44:12.717053    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 03:44:12.727627    9380 logs.go:282] 0 containers: []
	W1216 03:44:12.727643    9380 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:12.727705    9380 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 03:44:12.738780    9380 logs.go:282] 1 containers: [c89f5e8cf1c9]
	I1216 03:44:12.738798    9380 logs.go:123] Gathering logs for kube-apiserver [d3f425b205d7] ...
	I1216 03:44:12.738805    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3f425b205d7"
	I1216 03:44:12.753695    9380 logs.go:123] Gathering logs for coredns [9ab6cc04ba76] ...
	I1216 03:44:12.753706    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ab6cc04ba76"
	I1216 03:44:12.764439    9380 logs.go:123] Gathering logs for coredns [787b6f58b230] ...
	I1216 03:44:12.764451    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 787b6f58b230"
	I1216 03:44:12.775994    9380 logs.go:123] Gathering logs for storage-provisioner [c89f5e8cf1c9] ...
	I1216 03:44:12.776024    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89f5e8cf1c9"
	I1216 03:44:12.788256    9380 logs.go:123] Gathering logs for Docker ...
	I1216 03:44:12.788268    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 03:44:12.812267    9380 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:12.812278    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:12.845149    9380 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:12.845161    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 03:44:12.885379    9380 logs.go:123] Gathering logs for etcd [6b9c63ab5700] ...
	I1216 03:44:12.885390    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b9c63ab5700"
	I1216 03:44:12.899638    9380 logs.go:123] Gathering logs for kube-controller-manager [55d908ffe7f6] ...
	I1216 03:44:12.899649    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55d908ffe7f6"
	I1216 03:44:12.917751    9380 logs.go:123] Gathering logs for kube-proxy [bdb1d2530097] ...
	I1216 03:44:12.917760    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb1d2530097"
	I1216 03:44:12.929651    9380 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:12.929661    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:12.934257    9380 logs.go:123] Gathering logs for kube-scheduler [a3bb92f2b65b] ...
	I1216 03:44:12.934264    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3bb92f2b65b"
	I1216 03:44:12.949426    9380 logs.go:123] Gathering logs for container status ...
	I1216 03:44:12.949438    9380 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:12.961934    9380 logs.go:123] Gathering logs for coredns [d967b3bd4403] ...
	I1216 03:44:12.961946    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d967b3bd4403"
	I1216 03:44:12.973124    9380 logs.go:123] Gathering logs for coredns [a1d385fb248d] ...
	I1216 03:44:12.973136    9380 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1d385fb248d"
	I1216 03:44:15.486498    9380 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 03:44:20.488736    9380 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 03:44:20.493150    9380 out.go:201] 
	W1216 03:44:20.496218    9380 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1216 03:44:20.496225    9380 out.go:270] * 
	W1216 03:44:20.497366    9380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:44:20.506134    9380 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-873000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.60s)
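
The GUEST_START failure above is the wait loop expiring: minikube polls the apiserver's /healthz endpoint (api_server.go in the log) with a short per-request timeout until an overall deadline passes. A minimal sketch of that polling pattern, using only the standard library; the URL, the ~5s per-request timeout, and the 6m deadline are taken from the log, and skipping TLS verification here merely stands in for the real code's certificate handling:

	// healthzwait: poll an apiserver /healthz until it reports ok or a deadline passes.
	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 "ok" or the deadline passes.
	func waitHealthy(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout, matching the ~5s gaps in the log
			Transport: &http.Transport{
				// the apiserver serves a cluster-CA cert the host does not trust
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for stop := time.Now().Add(deadline); time.Now().Before(stop); {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X", err)
		}
	}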

                                                
                                    
TestPause/serial/Start (10.06s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-551000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-551000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.983361333s)

                                                
                                                
-- stdout --
	* [pause-551000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-551000" primary control-plane node in "pause-551000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-551000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-551000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-551000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-551000 -n pause-551000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-551000 -n pause-551000: exit status 7 (76.33325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-551000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.06s)
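
Every qemu2 failure in this run reduces to the same host-side fault: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before qemu is even launched. A quick standalone probe for that precondition (a sketch, not part of the suite; the socket path is the one in the logs):

	// vmnetprobe: check whether the socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver's failure mode above
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}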

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 : exit status 80 (10.148489708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-850000" primary control-plane node in "NoKubernetes-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000: exit status 7 (38.43175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.19s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 : exit status 80 (7.627688209s)

                                                
                                                
-- stdout --
	* [NoKubernetes-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-850000
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000: exit status 7 (58.619417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.69s)

                                                
                                    
TestNoKubernetes/serial/Start (7.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 : exit status 80 (7.430793458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-850000
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000: exit status 7 (40.654584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.47s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.88s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20107
- KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3568951452/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.88s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.37s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20107
- KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1235840847/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.37s)
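
Both hyperkit upgrade tests fail structurally rather than flakily: hyperkit is an Intel-only hypervisor, so minikube rejects the driver outright on darwin/arm64 with exit status 56 (DRV_UNSUPPORTED_OS). On this runner the tests would need a skip guard rather than a fix; a sketch of such a guard (skipIfNoHyperkit is a hypothetical helper, not the suite's actual code):

	package integration

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips tests that require the Intel-only hyperkit driver.
	func skipIfNoHyperkit(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}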

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 
I1216 03:45:33.826263    7256 install.go:79] stdout: 
W1216 03:45:33.826458    7256 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I1216 03:45:33.826479    7256 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit]
I1216 03:45:33.839867    7256 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit]
I1216 03:45:33.850882    7256 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit]
I1216 03:45:33.862119    7256 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit]
I1216 03:45:33.883113    7256 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 03:45:33.883208    7256 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1216 03:45:35.694017    7256 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1216 03:45:35.694035    7256 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1216 03:45:35.694085    7256 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1216 03:45:35.694119    7256 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/002/docker-machine-driver-hyperkit
I1216 03:45:36.096406    7256 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900] Decompressors:map[bz2:0x14000519fc0 gz:0x14000519fc8 tar:0x14000519f50 tar.bz2:0x14000519f60 tar.gz:0x14000519f80 tar.xz:0x14000519f90 tar.zst:0x14000519fa0 tbz2:0x14000519f60 tgz:0x14000519f80 txz:0x14000519f90 tzst:0x14000519fa0 xz:0x14000519fd0 zip:0x14000519ff0 zst:0x14000519fd8] Getters:map[file:0x1400152cad0 http:0x14000d14500 https:0x14000d14550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 03:45:36.096528    7256 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/002/docker-machine-driver-hyperkit
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 : exit status 80 (5.303119167s)

                                                
                                                
-- stdout --
	* [NoKubernetes-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-850000
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-850000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-850000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-850000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-850000 -n NoKubernetes-850000: exit status 7 (70.280083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.37s)
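
The pid-7256 lines interleaved above belong to the hyperkit driver install/update test running in parallel, and they show download.go's fallback: fetch the arch-suffixed release asset (with its .sha256 checksum) first, and on a 404 retry the common, unsuffixed name. A self-contained sketch of that shape, with the checksum verification the real code performs omitted for brevity:

	// driverfallback: arch-specific download with fallback to the common asset name.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url to dst, treating any non-200 status as an error.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0"
		dst := "docker-machine-driver-hyperkit"
		// Prefer the arch-specific asset; fall back to the common name when it 404s.
		if err := fetch(base+"/docker-machine-driver-hyperkit-arm64", dst); err != nil {
			fmt.Println("arch specific driver failed:", err, "- trying the common version")
			if err := fetch(base+"/docker-machine-driver-hyperkit", dst); err != nil {
				fmt.Println("common version failed too:", err)
				os.Exit(1)
			}
		}
		fmt.Println("downloaded", dst)
	}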

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.862565583s)

                                                
                                                
-- stdout --
	* [auto-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-989000" primary control-plane node in "auto-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:46:11.834760    9835 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:46:11.834927    9835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:11.834930    9835 out.go:358] Setting ErrFile to fd 2...
	I1216 03:46:11.834932    9835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:11.835065    9835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:46:11.836204    9835 out.go:352] Setting JSON to false
	I1216 03:46:11.853897    9835 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6342,"bootTime":1734343229,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:46:11.853978    9835 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:46:11.861076    9835 out.go:177] * [auto-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:46:11.869857    9835 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:46:11.869902    9835 notify.go:220] Checking for updates...
	I1216 03:46:11.877823    9835 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:46:11.880861    9835 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:46:11.884734    9835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:46:11.887832    9835 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:46:11.890914    9835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:46:11.894137    9835 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:11.894214    9835 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:11.894266    9835 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:46:11.898925    9835 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:46:11.904825    9835 start.go:297] selected driver: qemu2
	I1216 03:46:11.904831    9835 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:46:11.904837    9835 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:46:11.907499    9835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:46:11.910843    9835 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:46:11.913933    9835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:46:11.913949    9835 cni.go:84] Creating CNI manager for ""
	I1216 03:46:11.913970    9835 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:46:11.913975    9835 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:46:11.914005    9835 start.go:340] cluster config:
	{Name:auto-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:46:11.918989    9835 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:46:11.926905    9835 out.go:177] * Starting "auto-989000" primary control-plane node in "auto-989000" cluster
	I1216 03:46:11.930868    9835 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:46:11.930886    9835 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:46:11.930894    9835 cache.go:56] Caching tarball of preloaded images
	I1216 03:46:11.930968    9835 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:46:11.930974    9835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:46:11.931030    9835 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/auto-989000/config.json ...
	I1216 03:46:11.931042    9835 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/auto-989000/config.json: {Name:mke7947edd3ed50ced06fcba2a4dedef7b1fa694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:46:11.931533    9835 start.go:360] acquireMachinesLock for auto-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:11.931585    9835 start.go:364] duration metric: took 45.625µs to acquireMachinesLock for "auto-989000"
	I1216 03:46:11.931596    9835 start.go:93] Provisioning new machine with config: &{Name:auto-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:11.931631    9835 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:11.934924    9835 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:11.952425    9835 start.go:159] libmachine.API.Create for "auto-989000" (driver="qemu2")
	I1216 03:46:11.952454    9835 client.go:168] LocalClient.Create starting
	I1216 03:46:11.952523    9835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:11.952561    9835 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:11.952577    9835 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:11.952613    9835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:11.952643    9835 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:11.952654    9835 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:11.953091    9835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:12.125707    9835 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:12.187630    9835 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:12.187635    9835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:12.187866    9835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:12.197678    9835 main.go:141] libmachine: STDOUT: 
	I1216 03:46:12.197700    9835 main.go:141] libmachine: STDERR: 
	I1216 03:46:12.197752    9835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2 +20000M
	I1216 03:46:12.206437    9835 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:12.206454    9835 main.go:141] libmachine: STDERR: 
	I1216 03:46:12.206474    9835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:12.206479    9835 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:12.206490    9835 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:12.206518    9835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c1:c7:38:58:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:12.208341    9835 main.go:141] libmachine: STDOUT: 
	I1216 03:46:12.208357    9835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:12.208376    9835 client.go:171] duration metric: took 255.920417ms to LocalClient.Create
	I1216 03:46:14.210523    9835 start.go:128] duration metric: took 2.278911375s to createHost
	I1216 03:46:14.210604    9835 start.go:83] releasing machines lock for "auto-989000", held for 2.279051333s
	W1216 03:46:14.210673    9835 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:14.225004    9835 out.go:177] * Deleting "auto-989000" in qemu2 ...
	W1216 03:46:14.258282    9835 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:14.258313    9835 start.go:729] Will try again in 5 seconds ...
	I1216 03:46:19.260403    9835 start.go:360] acquireMachinesLock for auto-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:19.261029    9835 start.go:364] duration metric: took 519µs to acquireMachinesLock for "auto-989000"
	I1216 03:46:19.261163    9835 start.go:93] Provisioning new machine with config: &{Name:auto-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:19.261414    9835 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:19.267140    9835 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:19.318161    9835 start.go:159] libmachine.API.Create for "auto-989000" (driver="qemu2")
	I1216 03:46:19.318225    9835 client.go:168] LocalClient.Create starting
	I1216 03:46:19.318365    9835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:19.318453    9835 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:19.318471    9835 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:19.318544    9835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:19.318602    9835 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:19.318615    9835 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:19.319277    9835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:19.511411    9835 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:19.597020    9835 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:19.597025    9835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:19.597249    9835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:19.607204    9835 main.go:141] libmachine: STDOUT: 
	I1216 03:46:19.607239    9835 main.go:141] libmachine: STDERR: 
	I1216 03:46:19.607305    9835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2 +20000M
	I1216 03:46:19.615796    9835 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:19.615819    9835 main.go:141] libmachine: STDERR: 
	I1216 03:46:19.615831    9835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:19.615836    9835 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:19.615846    9835 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:19.615885    9835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:d3:03:98:58:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/auto-989000/disk.qcow2
	I1216 03:46:19.617672    9835 main.go:141] libmachine: STDOUT: 
	I1216 03:46:19.617692    9835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:19.617705    9835 client.go:171] duration metric: took 299.480417ms to LocalClient.Create
	I1216 03:46:21.619843    9835 start.go:128] duration metric: took 2.358443208s to createHost
	I1216 03:46:21.619915    9835 start.go:83] releasing machines lock for "auto-989000", held for 2.358906125s
	W1216 03:46:21.620224    9835 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:21.633643    9835 out.go:201] 
	W1216 03:46:21.638120    9835 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:46:21.638143    9835 out.go:270] * 
	W1216 03:46:21.640645    9835 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:46:21.652998    9835 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
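
All of the failures in this group share one root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU ever starts, and minikube aborts with exit status 80. A quick way to confirm the daemon is down, independent of minikube, is to dial the socket directly. The Go sketch below is a diagnostic illustration only; the socket path is taken from the command lines in these logs, and the program is not part of the test suite.

	package main

	// Probe the socket_vmnet control socket (diagnostic sketch, not minikube
	// code). A "connection refused" error here reproduces the failure mode
	// seen in every stderr block in this report and indicates that the
	// socket_vmnet daemon is not running on the host.
	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would print the same "connection refused" as the logs below, which points at the daemon (or the launchd job that is supposed to start it) rather than at QEMU or any individual profile.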

TestNetworkPlugins/group/kindnet/Start (10.08s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.07745575s)

-- stdout --
	* [kindnet-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-989000" primary control-plane node in "kindnet-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:46:24.054688    9947 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:46:24.054859    9947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:24.054862    9947 out.go:358] Setting ErrFile to fd 2...
	I1216 03:46:24.054865    9947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:24.054988    9947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:46:24.056224    9947 out.go:352] Setting JSON to false
	I1216 03:46:24.074194    9947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6355,"bootTime":1734343229,"procs":569,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:46:24.074274    9947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:46:24.080655    9947 out.go:177] * [kindnet-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:46:24.089453    9947 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:46:24.089503    9947 notify.go:220] Checking for updates...
	I1216 03:46:24.098379    9947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:46:24.101456    9947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:46:24.104423    9947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:46:24.107424    9947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:46:24.110431    9947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:46:24.113800    9947 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:24.113897    9947 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:24.113949    9947 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:46:24.117382    9947 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:46:24.124485    9947 start.go:297] selected driver: qemu2
	I1216 03:46:24.124492    9947 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:46:24.124499    9947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:46:24.126997    9947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:46:24.131417    9947 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:46:24.134463    9947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:46:24.134479    9947 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:46:24.134483    9947 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:46:24.134510    9947 start.go:340] cluster config:
	{Name:kindnet-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:46:24.139385    9947 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:46:24.147448    9947 out.go:177] * Starting "kindnet-989000" primary control-plane node in "kindnet-989000" cluster
	I1216 03:46:24.151415    9947 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:46:24.151431    9947 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:46:24.151443    9947 cache.go:56] Caching tarball of preloaded images
	I1216 03:46:24.151515    9947 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:46:24.151521    9947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:46:24.151583    9947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kindnet-989000/config.json ...
	I1216 03:46:24.151595    9947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kindnet-989000/config.json: {Name:mk67d979dddae95d7fe332a47b58eab49d4cfac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:46:24.152080    9947 start.go:360] acquireMachinesLock for kindnet-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:24.152131    9947 start.go:364] duration metric: took 45µs to acquireMachinesLock for "kindnet-989000"
	I1216 03:46:24.152142    9947 start.go:93] Provisioning new machine with config: &{Name:kindnet-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:24.152172    9947 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:24.161419    9947 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:24.179807    9947 start.go:159] libmachine.API.Create for "kindnet-989000" (driver="qemu2")
	I1216 03:46:24.179842    9947 client.go:168] LocalClient.Create starting
	I1216 03:46:24.179934    9947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:24.179978    9947 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:24.179993    9947 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:24.180040    9947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:24.180079    9947 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:24.180088    9947 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:24.180596    9947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:24.353069    9947 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:24.474710    9947 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:24.474716    9947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:24.474951    9947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:24.484810    9947 main.go:141] libmachine: STDOUT: 
	I1216 03:46:24.484829    9947 main.go:141] libmachine: STDERR: 
	I1216 03:46:24.484882    9947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2 +20000M
	I1216 03:46:24.493430    9947 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:24.493445    9947 main.go:141] libmachine: STDERR: 
	I1216 03:46:24.493460    9947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:24.493465    9947 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:24.493476    9947 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:24.493508    9947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:2c:d9:10:b2:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:24.495344    9947 main.go:141] libmachine: STDOUT: 
	I1216 03:46:24.495364    9947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:24.495384    9947 client.go:171] duration metric: took 315.541542ms to LocalClient.Create
	I1216 03:46:26.497579    9947 start.go:128] duration metric: took 2.345367708s to createHost
	I1216 03:46:26.497696    9947 start.go:83] releasing machines lock for "kindnet-989000", held for 2.345545833s
	W1216 03:46:26.497749    9947 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:26.510002    9947 out.go:177] * Deleting "kindnet-989000" in qemu2 ...
	W1216 03:46:26.540340    9947 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:26.540358    9947 start.go:729] Will try again in 5 seconds ...
	I1216 03:46:31.542508    9947 start.go:360] acquireMachinesLock for kindnet-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:31.543012    9947 start.go:364] duration metric: took 414.667µs to acquireMachinesLock for "kindnet-989000"
	I1216 03:46:31.543123    9947 start.go:93] Provisioning new machine with config: &{Name:kindnet-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:31.543529    9947 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:31.549266    9947 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:31.595760    9947 start.go:159] libmachine.API.Create for "kindnet-989000" (driver="qemu2")
	I1216 03:46:31.595815    9947 client.go:168] LocalClient.Create starting
	I1216 03:46:31.595960    9947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:31.596034    9947 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:31.596050    9947 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:31.596156    9947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:31.596226    9947 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:31.596238    9947 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:31.597448    9947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:31.789058    9947 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:32.027338    9947 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:32.027350    9947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:32.027670    9947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:32.038346    9947 main.go:141] libmachine: STDOUT: 
	I1216 03:46:32.038372    9947 main.go:141] libmachine: STDERR: 
	I1216 03:46:32.038431    9947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2 +20000M
	I1216 03:46:32.046943    9947 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:32.046958    9947 main.go:141] libmachine: STDERR: 
	I1216 03:46:32.046970    9947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:32.046978    9947 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:32.046986    9947 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:32.047020    9947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:78:b5:95:51:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kindnet-989000/disk.qcow2
	I1216 03:46:32.048815    9947 main.go:141] libmachine: STDOUT: 
	I1216 03:46:32.048833    9947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:32.048846    9947 client.go:171] duration metric: took 453.032458ms to LocalClient.Create
	I1216 03:46:34.050987    9947 start.go:128] duration metric: took 2.507442167s to createHost
	I1216 03:46:34.051034    9947 start.go:83] releasing machines lock for "kindnet-989000", held for 2.508040167s
	W1216 03:46:34.051349    9947 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:34.067152    9947 out.go:201] 
	W1216 03:46:34.072122    9947 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:46:34.072158    9947 out.go:270] * 
	* 
	W1216 03:46:34.074511    9947 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:46:34.085112    9947 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.08s)
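
The executed command line above also shows how the VM is wired to the network: socket_vmnet_client connects to the daemon socket and then launches qemu-system-aarch64, which inherits the connected socket as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"). The sketch below illustrates that fd-inheritance pattern in Go via exec.Cmd.ExtraFiles; the socket path and child command are hypothetical placeholders, and this is not minikube's or socket_vmnet's actual source.

	package main

	// Fd-passing sketch: entries in ExtraFiles become fd 3, 4, ... in the
	// child process, which is why the QEMU invocation in the log can refer
	// to the already-connected socket as fd=3. /tmp/demo.sock is a
	// placeholder and must have a listener for this to run.
	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/tmp/demo.sock") // placeholder socket path
		if err != nil {
			log.Fatal(err) // with no listener, this is the "Connection refused" case
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("sh", "-c", "echo hello >&3") // placeholder child writes to fd 3
		cmd.ExtraFiles = []*os.File{f}                    // first entry becomes fd 3
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the fd is handed over before QEMU starts, a refused connection fails the whole start attempt up front, which matches the sub-second LocalClient.Create durations in these logs.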

TestNetworkPlugins/group/calico/Start (10.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.185864958s)

-- stdout --
	* [calico-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-989000" primary control-plane node in "calico-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:46:36.587390   10060 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:46:36.587538   10060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:36.587541   10060 out.go:358] Setting ErrFile to fd 2...
	I1216 03:46:36.587544   10060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:36.587659   10060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:46:36.588822   10060 out.go:352] Setting JSON to false
	I1216 03:46:36.606873   10060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6367,"bootTime":1734343229,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:46:36.606937   10060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:46:36.614267   10060 out.go:177] * [calico-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:46:36.622246   10060 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:46:36.622301   10060 notify.go:220] Checking for updates...
	I1216 03:46:36.630199   10060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:46:36.634193   10060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:46:36.637226   10060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:46:36.640236   10060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:46:36.643173   10060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:46:36.646576   10060 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:36.646648   10060 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:36.646700   10060 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:46:36.651222   10060 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:46:36.658147   10060 start.go:297] selected driver: qemu2
	I1216 03:46:36.658158   10060 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:46:36.658166   10060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:46:36.660769   10060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:46:36.663225   10060 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:46:36.664831   10060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:46:36.664856   10060 cni.go:84] Creating CNI manager for "calico"
	I1216 03:46:36.664868   10060 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1216 03:46:36.664912   10060 start.go:340] cluster config:
	{Name:calico-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:46:36.669684   10060 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:46:36.678236   10060 out.go:177] * Starting "calico-989000" primary control-plane node in "calico-989000" cluster
	I1216 03:46:36.682194   10060 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:46:36.682211   10060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:46:36.682226   10060 cache.go:56] Caching tarball of preloaded images
	I1216 03:46:36.682315   10060 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:46:36.682321   10060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:46:36.682383   10060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/calico-989000/config.json ...
	I1216 03:46:36.682394   10060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/calico-989000/config.json: {Name:mk288dea134f185803324cdf0837402bafa739b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:46:36.682853   10060 start.go:360] acquireMachinesLock for calico-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:36.682904   10060 start.go:364] duration metric: took 44.25µs to acquireMachinesLock for "calico-989000"
	I1216 03:46:36.682916   10060 start.go:93] Provisioning new machine with config: &{Name:calico-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:36.682945   10060 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:36.687037   10060 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:36.705572   10060 start.go:159] libmachine.API.Create for "calico-989000" (driver="qemu2")
	I1216 03:46:36.705597   10060 client.go:168] LocalClient.Create starting
	I1216 03:46:36.705691   10060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:36.705731   10060 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:36.705745   10060 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:36.705787   10060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:36.705818   10060 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:36.705829   10060 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:36.706283   10060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:36.879446   10060 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:37.220762   10060 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:37.220776   10060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:37.221024   10060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:37.231418   10060 main.go:141] libmachine: STDOUT: 
	I1216 03:46:37.231433   10060 main.go:141] libmachine: STDERR: 
	I1216 03:46:37.231496   10060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2 +20000M
	I1216 03:46:37.239990   10060 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:37.240004   10060 main.go:141] libmachine: STDERR: 
	I1216 03:46:37.240020   10060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:37.240025   10060 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:37.240036   10060 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:37.240067   10060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:31:f3:9b:98:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:37.241885   10060 main.go:141] libmachine: STDOUT: 
	I1216 03:46:37.241899   10060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:37.241918   10060 client.go:171] duration metric: took 536.323708ms to LocalClient.Create
	I1216 03:46:39.244068   10060 start.go:128] duration metric: took 2.561147791s to createHost
	I1216 03:46:39.244143   10060 start.go:83] releasing machines lock for "calico-989000", held for 2.561276959s
	W1216 03:46:39.244248   10060 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:39.254231   10060 out.go:177] * Deleting "calico-989000" in qemu2 ...
	W1216 03:46:39.288120   10060 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:39.288147   10060 start.go:729] Will try again in 5 seconds ...
	I1216 03:46:44.290287   10060 start.go:360] acquireMachinesLock for calico-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:44.290863   10060 start.go:364] duration metric: took 458.208µs to acquireMachinesLock for "calico-989000"
	I1216 03:46:44.290995   10060 start.go:93] Provisioning new machine with config: &{Name:calico-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:44.291267   10060 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:44.295943   10060 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:44.344829   10060 start.go:159] libmachine.API.Create for "calico-989000" (driver="qemu2")
	I1216 03:46:44.344877   10060 client.go:168] LocalClient.Create starting
	I1216 03:46:44.345004   10060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:44.345092   10060 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:44.345123   10060 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:44.345186   10060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:44.345255   10060 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:44.345269   10060 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:44.349431   10060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:44.537902   10060 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:44.670389   10060 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:44.670395   10060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:44.670625   10060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:44.681194   10060 main.go:141] libmachine: STDOUT: 
	I1216 03:46:44.681210   10060 main.go:141] libmachine: STDERR: 
	I1216 03:46:44.681275   10060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2 +20000M
	I1216 03:46:44.689707   10060 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:44.689721   10060 main.go:141] libmachine: STDERR: 
	I1216 03:46:44.689732   10060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:44.689736   10060 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:44.689750   10060 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:44.689794   10060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:6b:c4:e5:13:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/calico-989000/disk.qcow2
	I1216 03:46:44.691657   10060 main.go:141] libmachine: STDOUT: 
	I1216 03:46:44.691672   10060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:44.691684   10060 client.go:171] duration metric: took 346.8065ms to LocalClient.Create
	I1216 03:46:46.693820   10060 start.go:128] duration metric: took 2.402563417s to createHost
	I1216 03:46:46.693882   10060 start.go:83] releasing machines lock for "calico-989000", held for 2.403040792s
	W1216 03:46:46.694245   10060 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:46.703800   10060 out.go:201] 
	W1216 03:46:46.713852   10060 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:46:46.713876   10060 out.go:270] * 
	* 
	W1216 03:46:46.716401   10060 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:46:46.727737   10060 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.19s)
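
The stderr blocks also record minikube's recovery path: the first create fails, the half-created profile is deleted, "Will try again in 5 seconds ..." is logged, a single retry runs, and only then does the command exit with status 80 (GUEST_PROVISION). A retry loop of roughly that shape is sketched below; the function names and the one-retry/5-second policy are inferred from these logs rather than taken from minikube's source.

	package main

	// Rough shape of the create/retry behavior visible in the stderr above:
	// attempt the host create, on failure clean up, wait 5s, retry once, then
	// give up. startHost and deleteHost are hypothetical stand-ins.
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost() {
		fmt.Println("* Deleting profile in qemu2 ...")
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost()
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test harness reports below
			}
		}
		fmt.Println("host started")
	}

Since the daemon never comes back within the 5-second window, both attempts fail identically and every test in this group lands on the same exit status 80.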

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.854718292s)

-- stdout --
	* [custom-flannel-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-989000" primary control-plane node in "custom-flannel-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:46:49.338659   10177 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:46:49.338826   10177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:49.338829   10177 out.go:358] Setting ErrFile to fd 2...
	I1216 03:46:49.338831   10177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:46:49.338990   10177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:46:49.340222   10177 out.go:352] Setting JSON to false
	I1216 03:46:49.358082   10177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6380,"bootTime":1734343229,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:46:49.358150   10177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:46:49.364794   10177 out.go:177] * [custom-flannel-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:46:49.373710   10177 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:46:49.373780   10177 notify.go:220] Checking for updates...
	I1216 03:46:49.382514   10177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:46:49.386711   10177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:46:49.389711   10177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:46:49.391222   10177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:46:49.394723   10177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:46:49.398116   10177 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:49.398219   10177 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:46:49.398260   10177 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:46:49.399973   10177 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:46:49.407673   10177 start.go:297] selected driver: qemu2
	I1216 03:46:49.407679   10177 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:46:49.407685   10177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:46:49.410104   10177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:46:49.412678   10177 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:46:49.416801   10177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:46:49.416816   10177 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 03:46:49.416824   10177 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1216 03:46:49.416858   10177 start.go:340] cluster config:
	{Name:custom-flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:46:49.421341   10177 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:46:49.429646   10177 out.go:177] * Starting "custom-flannel-989000" primary control-plane node in "custom-flannel-989000" cluster
	I1216 03:46:49.433707   10177 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:46:49.433724   10177 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:46:49.433735   10177 cache.go:56] Caching tarball of preloaded images
	I1216 03:46:49.433829   10177 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:46:49.433834   10177 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:46:49.433887   10177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/custom-flannel-989000/config.json ...
	I1216 03:46:49.433904   10177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/custom-flannel-989000/config.json: {Name:mkf1fdc25fb7ddf58a428345a336509ba1638724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:46:49.434372   10177 start.go:360] acquireMachinesLock for custom-flannel-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:49.434423   10177 start.go:364] duration metric: took 41.666µs to acquireMachinesLock for "custom-flannel-989000"
	I1216 03:46:49.434434   10177 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:49.434466   10177 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:49.439663   10177 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:49.457099   10177 start.go:159] libmachine.API.Create for "custom-flannel-989000" (driver="qemu2")
	I1216 03:46:49.457136   10177 client.go:168] LocalClient.Create starting
	I1216 03:46:49.457207   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:49.457243   10177 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:49.457257   10177 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:49.457297   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:49.457328   10177 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:49.457340   10177 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:49.457690   10177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:49.631032   10177 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:49.689761   10177 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:49.689767   10177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:49.690002   10177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:49.699795   10177 main.go:141] libmachine: STDOUT: 
	I1216 03:46:49.699816   10177 main.go:141] libmachine: STDERR: 
	I1216 03:46:49.699869   10177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2 +20000M
	I1216 03:46:49.708330   10177 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:49.708345   10177 main.go:141] libmachine: STDERR: 
	I1216 03:46:49.708367   10177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:49.708374   10177 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:49.708385   10177 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:49.708416   10177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a5:3d:59:97:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:49.710185   10177 main.go:141] libmachine: STDOUT: 
	I1216 03:46:49.710199   10177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:49.710219   10177 client.go:171] duration metric: took 253.082ms to LocalClient.Create
	I1216 03:46:51.712353   10177 start.go:128] duration metric: took 2.277902792s to createHost
	I1216 03:46:51.712477   10177 start.go:83] releasing machines lock for "custom-flannel-989000", held for 2.278030709s
	W1216 03:46:51.712534   10177 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:51.728126   10177 out.go:177] * Deleting "custom-flannel-989000" in qemu2 ...
	W1216 03:46:51.761265   10177 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:51.761297   10177 start.go:729] Will try again in 5 seconds ...
	I1216 03:46:56.763418   10177 start.go:360] acquireMachinesLock for custom-flannel-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:46:56.764037   10177 start.go:364] duration metric: took 510.542µs to acquireMachinesLock for "custom-flannel-989000"
	I1216 03:46:56.764171   10177 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:46:56.764410   10177 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:46:56.768149   10177 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:46:56.815968   10177 start.go:159] libmachine.API.Create for "custom-flannel-989000" (driver="qemu2")
	I1216 03:46:56.816021   10177 client.go:168] LocalClient.Create starting
	I1216 03:46:56.816153   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:46:56.816232   10177 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:56.816249   10177 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:56.816317   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:46:56.816379   10177 main.go:141] libmachine: Decoding PEM data...
	I1216 03:46:56.816404   10177 main.go:141] libmachine: Parsing certificate...
	I1216 03:46:56.818701   10177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:46:57.007374   10177 main.go:141] libmachine: Creating SSH key...
	I1216 03:46:57.091992   10177 main.go:141] libmachine: Creating Disk image...
	I1216 03:46:57.091998   10177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:46:57.092229   10177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:57.101960   10177 main.go:141] libmachine: STDOUT: 
	I1216 03:46:57.101981   10177 main.go:141] libmachine: STDERR: 
	I1216 03:46:57.102050   10177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2 +20000M
	I1216 03:46:57.110561   10177 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:46:57.110582   10177 main.go:141] libmachine: STDERR: 
	I1216 03:46:57.110593   10177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:57.110598   10177 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:46:57.110614   10177 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:46:57.110652   10177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:48:78:ab:ba:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/custom-flannel-989000/disk.qcow2
	I1216 03:46:57.112355   10177 main.go:141] libmachine: STDOUT: 
	I1216 03:46:57.112378   10177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:46:57.112391   10177 client.go:171] duration metric: took 296.367583ms to LocalClient.Create
	I1216 03:46:59.114585   10177 start.go:128] duration metric: took 2.350177291s to createHost
	I1216 03:46:59.114676   10177 start.go:83] releasing machines lock for "custom-flannel-989000", held for 2.350653167s
	W1216 03:46:59.115252   10177 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:46:59.129843   10177 out.go:201] 
	W1216 03:46:59.135133   10177 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:46:59.135159   10177 out.go:270] * 
	* 
	W1216 03:46:59.137635   10177 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:46:59.148008   10177 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
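Both attempts in this test fail at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with status 80 (GUEST_PROVISION). A minimal triage sketch for the CI host follows; the Homebrew-managed service and the restart command are assumptions based on a typical socket_vmnet install, not taken from this log:

	# Is the daemon socket present? (path copied from the error above)
	ls -l /var/run/socket_vmnet
	# Is the daemon process running at all?
	pgrep -fl socket_vmnet
	# Assumed Homebrew-managed service; restart it if the socket is missing
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet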

TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.894014083s)

-- stdout --
	* [false-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-989000" primary control-plane node in "false-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:47:01.722542   10296 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:47:01.722717   10296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:01.722720   10296 out.go:358] Setting ErrFile to fd 2...
	I1216 03:47:01.722723   10296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:01.722855   10296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:47:01.724009   10296 out.go:352] Setting JSON to false
	I1216 03:47:01.741811   10296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6392,"bootTime":1734343229,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:47:01.741882   10296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:47:01.747600   10296 out.go:177] * [false-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:47:01.755487   10296 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:47:01.755547   10296 notify.go:220] Checking for updates...
	I1216 03:47:01.764489   10296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:47:01.768475   10296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:47:01.772515   10296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:47:01.787465   10296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:47:01.790454   10296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:47:01.793778   10296 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:01.793877   10296 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:01.793927   10296 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:47:01.797452   10296 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:47:01.804476   10296 start.go:297] selected driver: qemu2
	I1216 03:47:01.804484   10296 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:47:01.804489   10296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:47:01.807224   10296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:47:01.810461   10296 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:47:01.814548   10296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:47:01.814573   10296 cni.go:84] Creating CNI manager for "false"
	I1216 03:47:01.814631   10296 start.go:340] cluster config:
	{Name:false-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:47:01.819802   10296 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:47:01.828494   10296 out.go:177] * Starting "false-989000" primary control-plane node in "false-989000" cluster
	I1216 03:47:01.832504   10296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:47:01.832522   10296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:47:01.832536   10296 cache.go:56] Caching tarball of preloaded images
	I1216 03:47:01.832634   10296 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:47:01.832640   10296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:47:01.832712   10296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/false-989000/config.json ...
	I1216 03:47:01.832724   10296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/false-989000/config.json: {Name:mkcc079bce44435f9d07e4003e2b701d037bf689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:47:01.833071   10296 start.go:360] acquireMachinesLock for false-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:01.833123   10296 start.go:364] duration metric: took 45.75µs to acquireMachinesLock for "false-989000"
	I1216 03:47:01.833135   10296 start.go:93] Provisioning new machine with config: &{Name:false-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:false-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:01.833166   10296 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:01.837482   10296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:01.855645   10296 start.go:159] libmachine.API.Create for "false-989000" (driver="qemu2")
	I1216 03:47:01.855670   10296 client.go:168] LocalClient.Create starting
	I1216 03:47:01.855750   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:01.855790   10296 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:01.855800   10296 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:01.855846   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:01.855879   10296 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:01.855889   10296 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:01.856365   10296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:02.030557   10296 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:02.062332   10296 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:02.062337   10296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:02.062576   10296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:02.072395   10296 main.go:141] libmachine: STDOUT: 
	I1216 03:47:02.072411   10296 main.go:141] libmachine: STDERR: 
	I1216 03:47:02.072461   10296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2 +20000M
	I1216 03:47:02.080877   10296 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:02.080893   10296 main.go:141] libmachine: STDERR: 
	I1216 03:47:02.080908   10296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:02.080913   10296 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:02.080926   10296 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:02.080956   10296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:5e:0e:9d:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:02.082762   10296 main.go:141] libmachine: STDOUT: 
	I1216 03:47:02.082777   10296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:02.082799   10296 client.go:171] duration metric: took 227.126792ms to LocalClient.Create
	I1216 03:47:04.084933   10296 start.go:128] duration metric: took 2.251787792s to createHost
	I1216 03:47:04.084994   10296 start.go:83] releasing machines lock for "false-989000", held for 2.251903625s
	W1216 03:47:04.085092   10296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:04.091410   10296 out.go:177] * Deleting "false-989000" in qemu2 ...
	W1216 03:47:04.134508   10296 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:04.134544   10296 start.go:729] Will try again in 5 seconds ...
	I1216 03:47:09.136676   10296 start.go:360] acquireMachinesLock for false-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:09.137310   10296 start.go:364] duration metric: took 510.209µs to acquireMachinesLock for "false-989000"
	I1216 03:47:09.137442   10296 start.go:93] Provisioning new machine with config: &{Name:false-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:false-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:09.137798   10296 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:09.143596   10296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:09.192194   10296 start.go:159] libmachine.API.Create for "false-989000" (driver="qemu2")
	I1216 03:47:09.192244   10296 client.go:168] LocalClient.Create starting
	I1216 03:47:09.192399   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:09.192482   10296 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:09.192498   10296 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:09.192565   10296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:09.192627   10296 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:09.192644   10296 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:09.195663   10296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:09.386787   10296 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:09.512936   10296 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:09.512942   10296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:09.513224   10296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:09.523574   10296 main.go:141] libmachine: STDOUT: 
	I1216 03:47:09.523594   10296 main.go:141] libmachine: STDERR: 
	I1216 03:47:09.523657   10296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2 +20000M
	I1216 03:47:09.532134   10296 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:09.532150   10296 main.go:141] libmachine: STDERR: 
	I1216 03:47:09.532160   10296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:09.532164   10296 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:09.532173   10296 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:09.532204   10296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4a:32:46:24:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/false-989000/disk.qcow2
	I1216 03:47:09.534026   10296 main.go:141] libmachine: STDOUT: 
	I1216 03:47:09.534041   10296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:09.534054   10296 client.go:171] duration metric: took 341.809583ms to LocalClient.Create
	I1216 03:47:11.536198   10296 start.go:128] duration metric: took 2.398415375s to createHost
	I1216 03:47:11.536242   10296 start.go:83] releasing machines lock for "false-989000", held for 2.398952375s
	W1216 03:47:11.536627   10296 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:11.551370   10296 out.go:201] 
	W1216 03:47:11.555412   10296 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:47:11.555450   10296 out.go:270] * 
	* 
	W1216 03:47:11.558361   10296 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:47:11.570294   10296 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
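Note that the per-VM setup succeeds on every attempt: both qemu-img convert and qemu-img resize return cleanly, and only the socket_vmnet_client wrapper fails. That localizes the problem to the host networking daemon rather than to QEMU or the disk images. The failing step can be reproduced in isolation with a sketch like the one below; wrapping a no-op command is an assumption inferred from the invocation shape in the log, where the client takes the socket path followed by the command to run:

	# Reproduce only the socket connection, independent of minikube
	# (client and socket paths copied from the log above)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# With the daemon down, this should print the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'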

TestNetworkPlugins/group/enable-default-cni/Start (9.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.926426625s)

-- stdout --
	* [enable-default-cni-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-989000" primary control-plane node in "enable-default-cni-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:47:13.912892   10408 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:47:13.913040   10408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:13.913044   10408 out.go:358] Setting ErrFile to fd 2...
	I1216 03:47:13.913047   10408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:13.913188   10408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:47:13.914598   10408 out.go:352] Setting JSON to false
	I1216 03:47:13.932662   10408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6404,"bootTime":1734343229,"procs":569,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:47:13.932741   10408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:47:13.938195   10408 out.go:177] * [enable-default-cni-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:47:13.945108   10408 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:47:13.945166   10408 notify.go:220] Checking for updates...
	I1216 03:47:13.953056   10408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:47:13.957066   10408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:47:13.961127   10408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:47:13.964044   10408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:47:13.967033   10408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:47:13.970433   10408 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:13.970511   10408 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:13.970563   10408 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:47:13.975045   10408 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:47:13.982061   10408 start.go:297] selected driver: qemu2
	I1216 03:47:13.982067   10408 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:47:13.982074   10408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:47:13.984660   10408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:47:13.988189   10408 out.go:177] * Automatically selected the socket_vmnet network
	E1216 03:47:13.991106   10408 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1216 03:47:13.991117   10408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:47:13.991131   10408 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:47:13.991135   10408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:47:13.991174   10408 start.go:340] cluster config:
	{Name:enable-default-cni-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:47:13.995996   10408 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:47:14.004056   10408 out.go:177] * Starting "enable-default-cni-989000" primary control-plane node in "enable-default-cni-989000" cluster
	I1216 03:47:14.008077   10408 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:47:14.008094   10408 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:47:14.008106   10408 cache.go:56] Caching tarball of preloaded images
	I1216 03:47:14.008201   10408 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:47:14.008207   10408 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:47:14.008277   10408 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/enable-default-cni-989000/config.json ...
	I1216 03:47:14.008293   10408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/enable-default-cni-989000/config.json: {Name:mk299da8601d6d0b3d4dbea9392c1dc31c1ca18d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:47:14.008650   10408 start.go:360] acquireMachinesLock for enable-default-cni-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:14.008703   10408 start.go:364] duration metric: took 45.208µs to acquireMachinesLock for "enable-default-cni-989000"
	I1216 03:47:14.008715   10408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:14.008748   10408 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:14.013073   10408 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:14.031380   10408 start.go:159] libmachine.API.Create for "enable-default-cni-989000" (driver="qemu2")
	I1216 03:47:14.031411   10408 client.go:168] LocalClient.Create starting
	I1216 03:47:14.031485   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:14.031527   10408 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:14.031540   10408 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:14.031579   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:14.031609   10408 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:14.031617   10408 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:14.032099   10408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:14.204704   10408 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:14.279740   10408 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:14.279746   10408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:14.279965   10408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:14.289741   10408 main.go:141] libmachine: STDOUT: 
	I1216 03:47:14.289769   10408 main.go:141] libmachine: STDERR: 
	I1216 03:47:14.289836   10408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2 +20000M
	I1216 03:47:14.298229   10408 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:14.298245   10408 main.go:141] libmachine: STDERR: 
	I1216 03:47:14.298261   10408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:14.298266   10408 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:14.298281   10408 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:14.298312   10408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:bd:ff:3b:31:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:14.300085   10408 main.go:141] libmachine: STDOUT: 
	I1216 03:47:14.300108   10408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:14.300136   10408 client.go:171] duration metric: took 268.723959ms to LocalClient.Create
	I1216 03:47:16.302276   10408 start.go:128] duration metric: took 2.293549167s to createHost
	I1216 03:47:16.302328   10408 start.go:83] releasing machines lock for "enable-default-cni-989000", held for 2.293657667s
	W1216 03:47:16.302436   10408 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:16.314869   10408 out.go:177] * Deleting "enable-default-cni-989000" in qemu2 ...
	W1216 03:47:16.349209   10408 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:16.349266   10408 start.go:729] Will try again in 5 seconds ...
	I1216 03:47:21.351360   10408 start.go:360] acquireMachinesLock for enable-default-cni-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:21.351871   10408 start.go:364] duration metric: took 421.417µs to acquireMachinesLock for "enable-default-cni-989000"
	I1216 03:47:21.351987   10408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:21.352220   10408 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:21.369723   10408 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:21.421180   10408 start.go:159] libmachine.API.Create for "enable-default-cni-989000" (driver="qemu2")
	I1216 03:47:21.421239   10408 client.go:168] LocalClient.Create starting
	I1216 03:47:21.421366   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:21.421453   10408 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:21.421468   10408 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:21.421531   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:21.421586   10408 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:21.421602   10408 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:21.422365   10408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:21.604022   10408 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:21.731341   10408 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:21.731350   10408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:21.731609   10408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:21.742156   10408 main.go:141] libmachine: STDOUT: 
	I1216 03:47:21.742178   10408 main.go:141] libmachine: STDERR: 
	I1216 03:47:21.742253   10408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2 +20000M
	I1216 03:47:21.750999   10408 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:21.751016   10408 main.go:141] libmachine: STDERR: 
	I1216 03:47:21.751034   10408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:21.751039   10408 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:21.751047   10408 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:21.751079   10408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:73:c0:10:da:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/enable-default-cni-989000/disk.qcow2
	I1216 03:47:21.752852   10408 main.go:141] libmachine: STDOUT: 
	I1216 03:47:21.752868   10408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:21.752882   10408 client.go:171] duration metric: took 331.6425ms to LocalClient.Create
	I1216 03:47:23.755066   10408 start.go:128] duration metric: took 2.402845s to createHost
	I1216 03:47:23.755144   10408 start.go:83] releasing machines lock for "enable-default-cni-989000", held for 2.403293417s
	W1216 03:47:23.755517   10408 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:23.772528   10408 out.go:201] 
	W1216 03:47:23.777385   10408 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:47:23.777418   10408 out.go:270] * 
	* 
	W1216 03:47:23.780305   10408 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:47:23.792274   10408 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.93s)
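
Every start in this group dies the same way: QEMU is never launched, because socket_vmnet_client cannot open the /var/run/socket_vmnet unix socket ("Connection refused"), and the automatic retry five seconds later hits the same wall. That implicates the socket_vmnet daemon on the build agent rather than the minikube binary under test. A minimal triage sketch, assuming the stock install paths that appear in the log above (the --vmnet-gateway address is illustrative, not taken from this run):

	# Is anything listening on the socket the tests expect?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Probe it the way minikube does: socket_vmnet_client connects first,
	# then execs the given command (a no-op here).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# If the daemon is down, restart it before re-running the suite
	# (flags are assumptions based on a stock socket_vmnet setup):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet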

TestNetworkPlugins/group/flannel/Start (10.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.032810625s)

-- stdout --
	* [flannel-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-989000" primary control-plane node in "flannel-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:47:26.100724   10520 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:47:26.100868   10520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:26.100871   10520 out.go:358] Setting ErrFile to fd 2...
	I1216 03:47:26.100873   10520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:26.101000   10520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:47:26.102172   10520 out.go:352] Setting JSON to false
	I1216 03:47:26.120524   10520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6417,"bootTime":1734343229,"procs":569,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:47:26.120593   10520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:47:26.126534   10520 out.go:177] * [flannel-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:47:26.134312   10520 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:47:26.134359   10520 notify.go:220] Checking for updates...
	I1216 03:47:26.142488   10520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:47:26.146444   10520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:47:26.150450   10520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:47:26.153469   10520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:47:26.156420   10520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:47:26.159726   10520 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:26.159803   10520 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:26.159852   10520 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:47:26.163450   10520 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:47:26.170494   10520 start.go:297] selected driver: qemu2
	I1216 03:47:26.170502   10520 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:47:26.170508   10520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:47:26.173032   10520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:47:26.177440   10520 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:47:26.180467   10520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:47:26.180487   10520 cni.go:84] Creating CNI manager for "flannel"
	I1216 03:47:26.180497   10520 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1216 03:47:26.180535   10520 start.go:340] cluster config:
	{Name:flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:47:26.185172   10520 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:47:26.192345   10520 out.go:177] * Starting "flannel-989000" primary control-plane node in "flannel-989000" cluster
	I1216 03:47:26.196420   10520 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:47:26.196437   10520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:47:26.196448   10520 cache.go:56] Caching tarball of preloaded images
	I1216 03:47:26.196552   10520 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:47:26.196557   10520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:47:26.196623   10520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/flannel-989000/config.json ...
	I1216 03:47:26.196634   10520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/flannel-989000/config.json: {Name:mk93b03c6bbb1d3292de3dfc94c3ac74b631e730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:47:26.197097   10520 start.go:360] acquireMachinesLock for flannel-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:26.197144   10520 start.go:364] duration metric: took 41.625µs to acquireMachinesLock for "flannel-989000"
	I1216 03:47:26.197155   10520 start.go:93] Provisioning new machine with config: &{Name:flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:26.197186   10520 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:26.201448   10520 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:26.218461   10520 start.go:159] libmachine.API.Create for "flannel-989000" (driver="qemu2")
	I1216 03:47:26.218487   10520 client.go:168] LocalClient.Create starting
	I1216 03:47:26.218556   10520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:26.218593   10520 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:26.218605   10520 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:26.218641   10520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:26.218669   10520 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:26.218677   10520 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:26.219164   10520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:26.392262   10520 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:26.596751   10520 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:26.596763   10520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:26.597017   10520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:26.607198   10520 main.go:141] libmachine: STDOUT: 
	I1216 03:47:26.607229   10520 main.go:141] libmachine: STDERR: 
	I1216 03:47:26.607297   10520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2 +20000M
	I1216 03:47:26.615773   10520 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:26.615793   10520 main.go:141] libmachine: STDERR: 
	I1216 03:47:26.615811   10520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:26.615816   10520 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:26.615830   10520 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:26.615866   10520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:5d:b8:40:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:26.617645   10520 main.go:141] libmachine: STDOUT: 
	I1216 03:47:26.617666   10520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:26.617686   10520 client.go:171] duration metric: took 399.201875ms to LocalClient.Create
	I1216 03:47:28.619826   10520 start.go:128] duration metric: took 2.42266325s to createHost
	I1216 03:47:28.619900   10520 start.go:83] releasing machines lock for "flannel-989000", held for 2.422784292s
	W1216 03:47:28.619952   10520 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:28.633415   10520 out.go:177] * Deleting "flannel-989000" in qemu2 ...
	W1216 03:47:28.663008   10520 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:28.663027   10520 start.go:729] Will try again in 5 seconds ...
	I1216 03:47:33.665285   10520 start.go:360] acquireMachinesLock for flannel-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:33.665864   10520 start.go:364] duration metric: took 454.375µs to acquireMachinesLock for "flannel-989000"
	I1216 03:47:33.666014   10520 start.go:93] Provisioning new machine with config: &{Name:flannel-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:33.666319   10520 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:33.682939   10520 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:33.733051   10520 start.go:159] libmachine.API.Create for "flannel-989000" (driver="qemu2")
	I1216 03:47:33.733097   10520 client.go:168] LocalClient.Create starting
	I1216 03:47:33.733227   10520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:33.733307   10520 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:33.733326   10520 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:33.733405   10520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:33.733474   10520 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:33.733490   10520 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:33.734549   10520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:33.917093   10520 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:34.026066   10520 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:34.026072   10520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:34.026313   10520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:34.036488   10520 main.go:141] libmachine: STDOUT: 
	I1216 03:47:34.036507   10520 main.go:141] libmachine: STDERR: 
	I1216 03:47:34.036560   10520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2 +20000M
	I1216 03:47:34.045003   10520 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:34.045031   10520 main.go:141] libmachine: STDERR: 
	I1216 03:47:34.045044   10520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:34.045049   10520 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:34.045057   10520 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:34.045090   10520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:54:4d:e2:28:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/flannel-989000/disk.qcow2
	I1216 03:47:34.046944   10520 main.go:141] libmachine: STDOUT: 
	I1216 03:47:34.046961   10520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:34.046973   10520 client.go:171] duration metric: took 313.877208ms to LocalClient.Create
	I1216 03:47:36.049112   10520 start.go:128] duration metric: took 2.382807708s to createHost
	I1216 03:47:36.049225   10520 start.go:83] releasing machines lock for "flannel-989000", held for 2.383365917s
	W1216 03:47:36.049678   10520 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:36.068397   10520 out.go:201] 
	W1216 03:47:36.072391   10520 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:47:36.072428   10520 out.go:270] * 
	* 
	W1216 03:47:36.084553   10520 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:47:36.091438   10520 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.03s)
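
The qemu-system-aarch64 command lines captured above also show how the guest NIC is wired: minikube never opens the vmnet socket itself. socket_vmnet_client connects to /var/run/socket_vmnet and then execs QEMU with that connection inherited as file descriptor 3, which is why the invocation reads -netdev socket,id=net0,fd=3 rather than naming a path. Once the daemon is healthy again, that handoff can be sanity-checked without minikube; a sketch (inspecting /dev/fd/3 in the child is an assumption inferred from the fd=3 flag above):

	# Run a trivial child under the wrapper and inspect the descriptor it
	# inherits; with a healthy daemon this lists a connected socket.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet ls -l /dev/fd/3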

TestNetworkPlugins/group/bridge/Start (10.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.04704125s)

-- stdout --
	* [bridge-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-989000" primary control-plane node in "bridge-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:47:38.575684   10640 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:47:38.575839   10640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:38.575842   10640 out.go:358] Setting ErrFile to fd 2...
	I1216 03:47:38.575845   10640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:38.575978   10640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:47:38.577146   10640 out.go:352] Setting JSON to false
	I1216 03:47:38.595267   10640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6429,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:47:38.595335   10640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:47:38.602080   10640 out.go:177] * [bridge-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:47:38.610829   10640 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:47:38.610891   10640 notify.go:220] Checking for updates...
	I1216 03:47:38.618747   10640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:47:38.622834   10640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:47:38.626857   10640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:47:38.629858   10640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:47:38.632828   10640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:47:38.636178   10640 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:38.636275   10640 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:38.636331   10640 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:47:38.640819   10640 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:47:38.647856   10640 start.go:297] selected driver: qemu2
	I1216 03:47:38.647862   10640 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:47:38.647872   10640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:47:38.650414   10640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:47:38.653808   10640 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:47:38.656875   10640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:47:38.656890   10640 cni.go:84] Creating CNI manager for "bridge"
	I1216 03:47:38.656894   10640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:47:38.656924   10640 start.go:340] cluster config:
	{Name:bridge-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:47:38.661795   10640 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:47:38.669799   10640 out.go:177] * Starting "bridge-989000" primary control-plane node in "bridge-989000" cluster
	I1216 03:47:38.673876   10640 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:47:38.673894   10640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:47:38.673906   10640 cache.go:56] Caching tarball of preloaded images
	I1216 03:47:38.673996   10640 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:47:38.674002   10640 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:47:38.674068   10640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/bridge-989000/config.json ...
	I1216 03:47:38.674094   10640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/bridge-989000/config.json: {Name:mka0ed0fea1ac8f143e90b183e1be3aafbad442b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:47:38.674557   10640 start.go:360] acquireMachinesLock for bridge-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:38.674608   10640 start.go:364] duration metric: took 45.083µs to acquireMachinesLock for "bridge-989000"
	I1216 03:47:38.674620   10640 start.go:93] Provisioning new machine with config: &{Name:bridge-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:38.674646   10640 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:38.678838   10640 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:38.697110   10640 start.go:159] libmachine.API.Create for "bridge-989000" (driver="qemu2")
	I1216 03:47:38.697140   10640 client.go:168] LocalClient.Create starting
	I1216 03:47:38.697228   10640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:38.697268   10640 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:38.697276   10640 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:38.697318   10640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:38.697348   10640 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:38.697359   10640 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:38.697750   10640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:38.870175   10640 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:39.081779   10640 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:39.081788   10640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:39.082083   10640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:39.092769   10640 main.go:141] libmachine: STDOUT: 
	I1216 03:47:39.092787   10640 main.go:141] libmachine: STDERR: 
	I1216 03:47:39.092844   10640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2 +20000M
	I1216 03:47:39.101489   10640 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:39.101511   10640 main.go:141] libmachine: STDERR: 
	I1216 03:47:39.101526   10640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:39.101532   10640 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:39.101547   10640 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:39.101579   10640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:44:aa:8c:cf:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:39.103457   10640 main.go:141] libmachine: STDOUT: 
	I1216 03:47:39.103471   10640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:39.103490   10640 client.go:171] duration metric: took 406.351041ms to LocalClient.Create
	I1216 03:47:41.105624   10640 start.go:128] duration metric: took 2.431004s to createHost
	I1216 03:47:41.105685   10640 start.go:83] releasing machines lock for "bridge-989000", held for 2.431111959s
	W1216 03:47:41.105797   10640 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:41.118980   10640 out.go:177] * Deleting "bridge-989000" in qemu2 ...
	W1216 03:47:41.148395   10640 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:41.148415   10640 start.go:729] Will try again in 5 seconds ...
	I1216 03:47:46.150525   10640 start.go:360] acquireMachinesLock for bridge-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:46.151051   10640 start.go:364] duration metric: took 435.833µs to acquireMachinesLock for "bridge-989000"
	I1216 03:47:46.151161   10640 start.go:93] Provisioning new machine with config: &{Name:bridge-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:46.151474   10640 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:46.168984   10640 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:46.217628   10640 start.go:159] libmachine.API.Create for "bridge-989000" (driver="qemu2")
	I1216 03:47:46.217684   10640 client.go:168] LocalClient.Create starting
	I1216 03:47:46.217826   10640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:46.217904   10640 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:46.217918   10640 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:46.217988   10640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:46.218046   10640 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:46.218061   10640 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:46.218784   10640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:46.402332   10640 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:46.521820   10640 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:46.521826   10640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:46.522057   10640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:46.532440   10640 main.go:141] libmachine: STDOUT: 
	I1216 03:47:46.532457   10640 main.go:141] libmachine: STDERR: 
	I1216 03:47:46.532519   10640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2 +20000M
	I1216 03:47:46.541014   10640 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:46.541029   10640 main.go:141] libmachine: STDERR: 
	I1216 03:47:46.541045   10640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:46.541051   10640 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:46.541061   10640 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:46.541095   10640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:9b:16:81:e5:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/bridge-989000/disk.qcow2
	I1216 03:47:46.542902   10640 main.go:141] libmachine: STDOUT: 
	I1216 03:47:46.542916   10640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:46.542930   10640 client.go:171] duration metric: took 325.246333ms to LocalClient.Create
	I1216 03:47:48.545160   10640 start.go:128] duration metric: took 2.393701875s to createHost
	I1216 03:47:48.545205   10640 start.go:83] releasing machines lock for "bridge-989000", held for 2.394174917s
	W1216 03:47:48.545488   10640 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:48.558110   10640 out.go:201] 
	W1216 03:47:48.562265   10640 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:47:48.562295   10640 out.go:270] * 
	* 
	W1216 03:47:48.564781   10640 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:47:48.577165   10640 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.05s)
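Every create attempt above fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the half-created machine is deleted. A quick way to confirm that no daemon is listening is to dial the socket directly; the sketch below is a hypothetical standalone probe, not minikube code, and the only detail taken from the logs is the socket path:

// probe_socket_vmnet.go — hypothetical diagnostic, not part of minikube.
// Dialing the unix socket that socket_vmnet_client uses reproduces the
// failure mode seen above: "connection refused" when the socket file
// exists but nothing is accepting on it, "no such file or directory"
// when the socket_vmnet daemon was never started at all.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing runs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If the probe fails the same way the tests do, the fix is likely on the build host (start the socket_vmnet service) rather than in minikube itself.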

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-989000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.906534334s)

                                                
                                                
-- stdout --
	* [kubenet-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-989000" primary control-plane node in "kubenet-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:47:50.961054   10749 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:47:50.961204   10749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:50.961207   10749 out.go:358] Setting ErrFile to fd 2...
	I1216 03:47:50.961210   10749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:47:50.961337   10749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:47:50.962508   10749 out.go:352] Setting JSON to false
	I1216 03:47:50.980432   10749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6441,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:47:50.980505   10749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:47:50.987474   10749 out.go:177] * [kubenet-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:47:50.994392   10749 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:47:50.994447   10749 notify.go:220] Checking for updates...
	I1216 03:47:51.002323   10749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:47:51.006404   10749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:47:51.009383   10749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:47:51.012380   10749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:47:51.015397   10749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:47:51.018699   10749 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:51.018790   10749 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:47:51.018848   10749 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:47:51.023360   10749 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:47:51.030364   10749 start.go:297] selected driver: qemu2
	I1216 03:47:51.030369   10749 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:47:51.030375   10749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:47:51.033042   10749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:47:51.036350   10749 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:47:51.039429   10749 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:47:51.039448   10749 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 03:47:51.039481   10749 start.go:340] cluster config:
	{Name:kubenet-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:47:51.044241   10749 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:47:51.051373   10749 out.go:177] * Starting "kubenet-989000" primary control-plane node in "kubenet-989000" cluster
	I1216 03:47:51.055400   10749 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:47:51.055416   10749 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:47:51.055432   10749 cache.go:56] Caching tarball of preloaded images
	I1216 03:47:51.055508   10749 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:47:51.055514   10749 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:47:51.055579   10749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kubenet-989000/config.json ...
	I1216 03:47:51.055590   10749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/kubenet-989000/config.json: {Name:mka91c7bac41fe376e4c86c57b113278ecf4bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:47:51.056048   10749 start.go:360] acquireMachinesLock for kubenet-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:51.056095   10749 start.go:364] duration metric: took 41.416µs to acquireMachinesLock for "kubenet-989000"
	I1216 03:47:51.056106   10749 start.go:93] Provisioning new machine with config: &{Name:kubenet-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:51.056133   10749 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:51.061372   10749 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:51.078244   10749 start.go:159] libmachine.API.Create for "kubenet-989000" (driver="qemu2")
	I1216 03:47:51.078267   10749 client.go:168] LocalClient.Create starting
	I1216 03:47:51.078333   10749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:51.078378   10749 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:51.078386   10749 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:51.078423   10749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:51.078451   10749 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:51.078459   10749 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:51.078953   10749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:51.250418   10749 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:51.402098   10749 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:51.402105   10749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:51.402370   10749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:51.412949   10749 main.go:141] libmachine: STDOUT: 
	I1216 03:47:51.412976   10749 main.go:141] libmachine: STDERR: 
	I1216 03:47:51.413030   10749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2 +20000M
	I1216 03:47:51.421593   10749 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:51.421616   10749 main.go:141] libmachine: STDERR: 
	I1216 03:47:51.421636   10749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:51.421642   10749 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:51.421653   10749 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:51.421684   10749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:05:08:bb:af:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:51.423499   10749 main.go:141] libmachine: STDOUT: 
	I1216 03:47:51.423511   10749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:51.423531   10749 client.go:171] duration metric: took 345.264959ms to LocalClient.Create
	I1216 03:47:53.425674   10749 start.go:128] duration metric: took 2.369562292s to createHost
	I1216 03:47:53.425748   10749 start.go:83] releasing machines lock for "kubenet-989000", held for 2.369687125s
	W1216 03:47:53.425794   10749 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:53.442959   10749 out.go:177] * Deleting "kubenet-989000" in qemu2 ...
	W1216 03:47:53.474015   10749 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:47:53.474049   10749 start.go:729] Will try again in 5 seconds ...
	I1216 03:47:58.476117   10749 start.go:360] acquireMachinesLock for kubenet-989000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:47:58.476633   10749 start.go:364] duration metric: took 442.5µs to acquireMachinesLock for "kubenet-989000"
	I1216 03:47:58.476745   10749 start.go:93] Provisioning new machine with config: &{Name:kubenet-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:47:58.477047   10749 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:47:58.482602   10749 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:47:58.534400   10749 start.go:159] libmachine.API.Create for "kubenet-989000" (driver="qemu2")
	I1216 03:47:58.534440   10749 client.go:168] LocalClient.Create starting
	I1216 03:47:58.534574   10749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:47:58.534672   10749 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:58.534692   10749 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:58.534766   10749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:47:58.534825   10749 main.go:141] libmachine: Decoding PEM data...
	I1216 03:47:58.534837   10749 main.go:141] libmachine: Parsing certificate...
	I1216 03:47:58.535484   10749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:47:58.729384   10749 main.go:141] libmachine: Creating SSH key...
	I1216 03:47:58.759321   10749 main.go:141] libmachine: Creating Disk image...
	I1216 03:47:58.759328   10749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:47:58.759551   10749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:58.769376   10749 main.go:141] libmachine: STDOUT: 
	I1216 03:47:58.769399   10749 main.go:141] libmachine: STDERR: 
	I1216 03:47:58.769462   10749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2 +20000M
	I1216 03:47:58.777968   10749 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:47:58.777986   10749 main.go:141] libmachine: STDERR: 
	I1216 03:47:58.778006   10749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:58.778018   10749 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:47:58.778026   10749 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:47:58.778058   10749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1a:5a:4a:34:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/kubenet-989000/disk.qcow2
	I1216 03:47:58.779911   10749 main.go:141] libmachine: STDOUT: 
	I1216 03:47:58.779931   10749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:47:58.779944   10749 client.go:171] duration metric: took 245.50225ms to LocalClient.Create
	I1216 03:48:00.782094   10749 start.go:128] duration metric: took 2.305054875s to createHost
	I1216 03:48:00.782188   10749 start.go:83] releasing machines lock for "kubenet-989000", held for 2.305573959s
	W1216 03:48:00.782571   10749 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:00.800325   10749 out.go:201] 
	W1216 03:48:00.805423   10749 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:00.805467   10749 out.go:270] * 
	* 
	W1216 03:48:00.808084   10749 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:00.820326   10749 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
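The same "Connection refused" appears here, and the log makes the retry shape visible: the create fails, the half-created machine is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and exactly one more attempt is made before the run exits with GUEST_PROVISION. Below is a compact sketch of that single-retry flow; createHost and deleteHost are hypothetical stand-ins for the qemu2 driver calls, not minikube's actual functions:

// retry_sketch.go — an illustration of the create → delete → wait → retry
// flow visible in the logs above; the two helpers are hypothetical stubs.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for libmachine.API.Create; in this run it always
// fails the same way, because no socket_vmnet daemon is listening.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for the cleanup of the half-created machine.
func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

func retryCreate(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(name)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
	return createHost(name)     // second and final attempt
}

func main() {
	if err := retryCreate("kubenet-989000"); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Both attempts hit the same refused socket, which is why each of these failing starts takes roughly ten seconds: two create cycles of about 2.3-2.4s each plus the fixed 5s pause.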

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.925367292s)

                                                
                                                
-- stdout --
	* [old-k8s-version-424000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:48:03.188224   10858 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:03.188381   10858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:03.188384   10858 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:03.188387   10858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:03.188512   10858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:03.189681   10858 out.go:352] Setting JSON to false
	I1216 03:48:03.207624   10858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6454,"bootTime":1734343229,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:03.207699   10858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:03.213640   10858 out.go:177] * [old-k8s-version-424000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:03.221520   10858 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:03.221566   10858 notify.go:220] Checking for updates...
	I1216 03:48:03.229458   10858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:03.232514   10858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:03.235447   10858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:03.238486   10858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:03.241501   10858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:03.243266   10858 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:03.243339   10858 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:03.243397   10858 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:03.246417   10858 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:48:03.253356   10858 start.go:297] selected driver: qemu2
	I1216 03:48:03.253366   10858 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:48:03.253376   10858 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:03.256024   10858 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:48:03.260467   10858 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:48:03.264585   10858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:03.264607   10858 cni.go:84] Creating CNI manager for ""
	I1216 03:48:03.264629   10858 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 03:48:03.264682   10858 start.go:340] cluster config:
	{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:03.269393   10858 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:03.277504   10858 out.go:177] * Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	I1216 03:48:03.281313   10858 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:48:03.281330   10858 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:48:03.281341   10858 cache.go:56] Caching tarball of preloaded images
	I1216 03:48:03.281421   10858 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:48:03.281434   10858 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 03:48:03.281493   10858 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/old-k8s-version-424000/config.json ...
	I1216 03:48:03.281509   10858 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/old-k8s-version-424000/config.json: {Name:mkafcd5d1bf02fca025d5484a61c0f5d7c07ebc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:48:03.281987   10858 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:03.282039   10858 start.go:364] duration metric: took 45.333µs to acquireMachinesLock for "old-k8s-version-424000"
	I1216 03:48:03.282050   10858 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:03.282077   10858 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:03.286544   10858 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:03.304032   10858 start.go:159] libmachine.API.Create for "old-k8s-version-424000" (driver="qemu2")
	I1216 03:48:03.304059   10858 client.go:168] LocalClient.Create starting
	I1216 03:48:03.304123   10858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:03.304165   10858 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:03.304178   10858 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:03.304214   10858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:03.304243   10858 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:03.304251   10858 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:03.304739   10858 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:03.475842   10858 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:03.521974   10858 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:03.521979   10858 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:03.522198   10858 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:03.532057   10858 main.go:141] libmachine: STDOUT: 
	I1216 03:48:03.532083   10858 main.go:141] libmachine: STDERR: 
	I1216 03:48:03.532141   10858 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2 +20000M
	I1216 03:48:03.540524   10858 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:03.540539   10858 main.go:141] libmachine: STDERR: 
	I1216 03:48:03.540559   10858 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:03.540563   10858 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:03.540575   10858 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:03.540600   10858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:60:82:14:a4:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:03.542401   10858 main.go:141] libmachine: STDOUT: 
	I1216 03:48:03.542415   10858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:03.542434   10858 client.go:171] duration metric: took 238.374375ms to LocalClient.Create
	I1216 03:48:05.544572   10858 start.go:128] duration metric: took 2.262515833s to createHost
	I1216 03:48:05.544637   10858 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 2.262630708s
	W1216 03:48:05.544730   10858 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:05.557946   10858 out.go:177] * Deleting "old-k8s-version-424000" in qemu2 ...
	W1216 03:48:05.587056   10858 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:05.587078   10858 start.go:729] Will try again in 5 seconds ...
	I1216 03:48:10.589268   10858 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:10.589779   10858 start.go:364] duration metric: took 422.208µs to acquireMachinesLock for "old-k8s-version-424000"
	I1216 03:48:10.589895   10858 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:10.590151   10858 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:10.595705   10858 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:10.643943   10858 start.go:159] libmachine.API.Create for "old-k8s-version-424000" (driver="qemu2")
	I1216 03:48:10.643991   10858 client.go:168] LocalClient.Create starting
	I1216 03:48:10.644130   10858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:10.644204   10858 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:10.644221   10858 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:10.644278   10858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:10.644335   10858 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:10.644347   10858 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:10.647290   10858 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:10.836509   10858 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:11.010114   10858 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:11.010120   10858 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:11.010374   10858 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:11.020729   10858 main.go:141] libmachine: STDOUT: 
	I1216 03:48:11.020749   10858 main.go:141] libmachine: STDERR: 
	I1216 03:48:11.020817   10858 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2 +20000M
	I1216 03:48:11.029330   10858 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:11.029348   10858 main.go:141] libmachine: STDERR: 
	I1216 03:48:11.029360   10858 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:11.029365   10858 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:11.029373   10858 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:11.029410   10858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:0a:4b:32:e4:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:11.031221   10858 main.go:141] libmachine: STDOUT: 
	I1216 03:48:11.031236   10858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:11.031248   10858 client.go:171] duration metric: took 387.259625ms to LocalClient.Create
	I1216 03:48:13.033384   10858 start.go:128] duration metric: took 2.443248542s to createHost
	I1216 03:48:13.033426   10858 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 2.443662375s
	W1216 03:48:13.033785   10858 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:13.048508   10858 out.go:201] 
	W1216 03:48:13.053523   10858 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:13.053553   10858 out.go:270] * 
	* 
	W1216 03:48:13.056579   10858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:13.068506   10858 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (72.710792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.00s)
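
Every failure in this group traces back to the same line in the stderr capture above: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client could not reach the socket_vmnet daemon (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A minimal reachability probe for that unix socket, as a hedged sketch: the socket path comes from the log above, everything else is illustrative.

package main

import (
	"fmt"
	"net"
	"os"
)

// Dial the unix socket that socket_vmnet_client hands to qemu as fd 3.
// A "connection refused" here reproduces the STDERR captured above.
func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command line
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If the probe fails the way these runs did, the daemon on the build agent was not running or not listening; how to restart it depends on how socket_vmnet was installed on the agent, so no specific service command is asserted here.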

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml: exit status 1 (29.549125ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (33.697583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (33.522375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-424000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system: exit status 1 (27.5485ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (34.408667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.202781875s)

-- stdout --
	* [old-k8s-version-424000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:48:17.231994   10908 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:17.232152   10908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:17.232155   10908 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:17.232158   10908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:17.232299   10908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:17.233446   10908 out.go:352] Setting JSON to false
	I1216 03:48:17.251122   10908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6468,"bootTime":1734343229,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:17.251194   10908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:17.256383   10908 out.go:177] * [old-k8s-version-424000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:17.265366   10908 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:17.265428   10908 notify.go:220] Checking for updates...
	I1216 03:48:17.272339   10908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:17.276415   10908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:17.279307   10908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:17.282352   10908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:17.285381   10908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:17.288554   10908 config.go:182] Loaded profile config "old-k8s-version-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 03:48:17.292278   10908 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1216 03:48:17.295369   10908 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:17.299352   10908 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:48:17.306329   10908 start.go:297] selected driver: qemu2
	I1216 03:48:17.306338   10908 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:17.306399   10908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:17.309082   10908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:17.309105   10908 cni.go:84] Creating CNI manager for ""
	I1216 03:48:17.309128   10908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 03:48:17.309148   10908 start.go:340] cluster config:
	{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:17.313726   10908 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:17.322351   10908 out.go:177] * Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	I1216 03:48:17.326397   10908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:48:17.326410   10908 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:48:17.326418   10908 cache.go:56] Caching tarball of preloaded images
	I1216 03:48:17.326488   10908 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:48:17.326493   10908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 03:48:17.326540   10908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/old-k8s-version-424000/config.json ...
	I1216 03:48:17.327115   10908 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:17.327147   10908 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "old-k8s-version-424000"
	I1216 03:48:17.327154   10908 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:48:17.327160   10908 fix.go:54] fixHost starting: 
	I1216 03:48:17.327277   10908 fix.go:112] recreateIfNeeded on old-k8s-version-424000: state=Stopped err=<nil>
	W1216 03:48:17.327285   10908 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:48:17.332209   10908 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	I1216 03:48:17.340357   10908 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:17.340403   10908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:0a:4b:32:e4:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:17.342570   10908 main.go:141] libmachine: STDOUT: 
	I1216 03:48:17.342590   10908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:17.342618   10908 fix.go:56] duration metric: took 15.457917ms for fixHost
	I1216 03:48:17.342624   10908 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 15.472833ms
	W1216 03:48:17.342629   10908 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:17.342669   10908 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:17.342673   10908 start.go:729] Will try again in 5 seconds ...
	I1216 03:48:22.344738   10908 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:22.345329   10908 start.go:364] duration metric: took 448.792µs to acquireMachinesLock for "old-k8s-version-424000"
	I1216 03:48:22.345533   10908 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:48:22.345553   10908 fix.go:54] fixHost starting: 
	I1216 03:48:22.346359   10908 fix.go:112] recreateIfNeeded on old-k8s-version-424000: state=Stopped err=<nil>
	W1216 03:48:22.346386   10908 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:48:22.353720   10908 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	I1216 03:48:22.357679   10908 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:22.357919   10908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:0a:4b:32:e4:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I1216 03:48:22.367522   10908 main.go:141] libmachine: STDOUT: 
	I1216 03:48:22.367569   10908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:22.367628   10908 fix.go:56] duration metric: took 22.077125ms for fixHost
	I1216 03:48:22.367645   10908 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 22.227875ms
	W1216 03:48:22.367845   10908 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:22.376636   10908 out.go:201] 
	W1216 03:48:22.379827   10908 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:22.379850   10908 out.go:270] * 
	* 
	W1216 03:48:22.382494   10908 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:22.389829   10908 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (72.4415ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)
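
The trace above also shows minikube's start flow on an existing profile: fixHost fails, start.go logs "StartHost failed, but will try again", sleeps five seconds, retries once, and only then exits with GUEST_PROVISION. A compressed sketch of that one-retry control flow; fixHost here is a stand-in stub returning the same driver error, not minikube's real implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the host-start step; in this run it always failed
// the way the qemu2 driver did.
func fixHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := fixHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err = fixHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}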

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-424000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (36.00875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-424000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.23375ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (34.337041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-424000 image list --format=json
start_stop_delete_test.go:302: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
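
The block above is go-cmp output, as its "(-want +got)" note indicates: entries prefixed with "-" are images the test wanted from "minikube image list" but did not get, and the got side is empty because the VM never booted. For readers unfamiliar with the notation, a tiny example that produces the same minus-only shape (requires the github.com/google/go-cmp module; the slice contents are just one entry from the list above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"k8s.gcr.io/pause:3.2"}
	got := []string{} // empty: the host never started, so no images were listed
	// cmp.Diff prefixes entries present in want but missing from got with "-".
	fmt.Println(cmp.Diff(want, got))
}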
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (34.038042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1: exit status 83 (46.689459ms)

-- stdout --
	* The control-plane node old-k8s-version-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-424000"

-- /stdout --
** stderr ** 
	I1216 03:48:22.687940   10927 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:22.688378   10927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:22.688381   10927 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:22.688383   10927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:22.688534   10927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:22.688742   10927 out.go:352] Setting JSON to false
	I1216 03:48:22.688749   10927 mustload.go:65] Loading cluster: old-k8s-version-424000
	I1216 03:48:22.688958   10927 config.go:182] Loaded profile config "old-k8s-version-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 03:48:22.693762   10927 out.go:177] * The control-plane node old-k8s-version-424000 host is not running: state=Stopped
	I1216 03:48:22.697738   10927 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-424000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (33.758667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (34.037375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.874383791s)

-- stdout --
	* [no-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-766000" primary control-plane node in "no-preload-766000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:48:23.032267   10944 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:23.032414   10944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:23.032419   10944 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:23.032421   10944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:23.032555   10944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:23.033751   10944 out.go:352] Setting JSON to false
	I1216 03:48:23.051446   10944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6474,"bootTime":1734343229,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:23.051519   10944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:23.056766   10944 out.go:177] * [no-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:23.063771   10944 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:23.063865   10944 notify.go:220] Checking for updates...
	I1216 03:48:23.070744   10944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:23.073783   10944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:23.076724   10944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:23.079751   10944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:23.082772   10944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:23.086081   10944 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:23.086143   10944 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:23.086189   10944 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:23.090712   10944 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:48:23.097680   10944 start.go:297] selected driver: qemu2
	I1216 03:48:23.097686   10944 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:48:23.097694   10944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:23.100204   10944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:48:23.103711   10944 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:48:23.107807   10944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:23.107822   10944 cni.go:84] Creating CNI manager for ""
	I1216 03:48:23.107843   10944 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:48:23.107864   10944 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:48:23.107896   10944 start.go:340] cluster config:
	{Name:no-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:23.112571   10944 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.120751   10944 out.go:177] * Starting "no-preload-766000" primary control-plane node in "no-preload-766000" cluster
	I1216 03:48:23.124827   10944 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:48:23.124925   10944 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/no-preload-766000/config.json ...
	I1216 03:48:23.124951   10944 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/no-preload-766000/config.json: {Name:mk90f5584c05c189a37f8a5fafac6284836422b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:48:23.125120   10944 cache.go:107] acquiring lock: {Name:mk50deb330e5a9b7d6546e45873df2651ccdc66e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125120   10944 cache.go:107] acquiring lock: {Name:mk1cb263c9c793f30766f3b7ed4d7b6ba0c7608a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125140   10944 cache.go:107] acquiring lock: {Name:mk357f4807c800ee42844cc7c6c335cbacb7001d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125137   10944 cache.go:107] acquiring lock: {Name:mk08c17976bfda97586777f2b753b990cebe6437 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125190   10944 cache.go:107] acquiring lock: {Name:mkdbc706df22038e9f088e54cfaae344153eed3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125119   10944 cache.go:107] acquiring lock: {Name:mkec6aeff458f86725ac0491f24a17ab3f8437d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125200   10944 cache.go:107] acquiring lock: {Name:mkcaa5c40db1faf1b4103e1ba316277703814f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125118   10944 cache.go:107] acquiring lock: {Name:mk43146b652c427a15299fa1b6267909fa626be4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:23.125598   10944 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1216 03:48:23.125637   10944 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1216 03:48:23.125664   10944 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 03:48:23.125681   10944 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1216 03:48:23.125717   10944 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:48:23.125735   10944 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 786.584µs
	I1216 03:48:23.125896   10944 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1216 03:48:23.125906   10944 start.go:360] acquireMachinesLock for no-preload-766000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:23.125948   10944 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 03:48:23.125962   10944 start.go:364] duration metric: took 50.375µs to acquireMachinesLock for "no-preload-766000"
	I1216 03:48:23.125984   10944 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1216 03:48:23.125974   10944 start.go:93] Provisioning new machine with config: &{Name:no-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:23.126015   10944 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:23.126030   10944 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:48:23.132704   10944 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:23.136377   10944 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1216 03:48:23.136397   10944 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1216 03:48:23.136423   10944 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 03:48:23.137056   10944 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1216 03:48:23.137140   10944 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 03:48:23.137213   10944 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1216 03:48:23.138915   10944 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1216 03:48:23.150749   10944 start.go:159] libmachine.API.Create for "no-preload-766000" (driver="qemu2")
	I1216 03:48:23.150775   10944 client.go:168] LocalClient.Create starting
	I1216 03:48:23.150886   10944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:23.150925   10944 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:23.150943   10944 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:23.150983   10944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:23.151014   10944 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:23.151023   10944 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:23.151410   10944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:23.327609   10944 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:23.435592   10944 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:23.435621   10944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:23.435865   10944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:23.446070   10944 main.go:141] libmachine: STDOUT: 
	I1216 03:48:23.446101   10944 main.go:141] libmachine: STDERR: 
	I1216 03:48:23.446181   10944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2 +20000M
	I1216 03:48:23.456074   10944 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:23.456100   10944 main.go:141] libmachine: STDERR: 
	I1216 03:48:23.456128   10944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
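The two qemu-img invocations above are how the machine disk is materialized: the raw scratch file is converted to qcow2, then the image is grown by the requested 20000 MB ("+20000M" adds to the current virtual size). An equivalent standalone sketch, with the paths taken from the log and error handling added:

package main

import (
	"log"
	"os/exec"
)

func main() {
	base := "/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2"

	// Step 1: convert the raw scratch image to qcow2 (as logged above).
	if out, err := exec.Command("qemu-img", "convert",
		"-f", "raw", "-O", "qcow2", base+".raw", base).CombinedOutput(); err != nil {
		log.Fatalf("qemu-img convert: %v\n%s", err, out)
	}

	// Step 2: grow the qcow2 image by a further 20000 MB.
	if out, err := exec.Command("qemu-img", "resize", base, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("qemu-img resize: %v\n%s", err, out)
	}
}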
	I1216 03:48:23.456132   10944 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:23.456146   10944 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:23.456179   10944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:4c:03:36:41:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:23.458681   10944 main.go:141] libmachine: STDOUT: 
	I1216 03:48:23.458706   10944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:23.458731   10944 client.go:171] duration metric: took 307.955416ms to LocalClient.Create
	I1216 03:48:23.670287   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1216 03:48:23.670331   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1216 03:48:23.674396   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1216 03:48:23.714138   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1216 03:48:23.744955   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 03:48:23.832025   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1216 03:48:23.856999   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1216 03:48:23.857024   10944 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 732.041958ms
	I1216 03:48:23.857041   10944 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1216 03:48:23.913489   10944 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1216 03:48:25.458936   10944 start.go:128] duration metric: took 2.33293225s to createHost
	I1216 03:48:25.459000   10944 start.go:83] releasing machines lock for "no-preload-766000", held for 2.33307175s
	W1216 03:48:25.459058   10944 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:25.474040   10944 out.go:177] * Deleting "no-preload-766000" in qemu2 ...
	W1216 03:48:25.509394   10944 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:25.509421   10944 start.go:729] Will try again in 5 seconds ...
	I1216 03:48:26.608695   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1216 03:48:26.608753   10944 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 3.483784125s
	I1216 03:48:26.608776   10944 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1216 03:48:27.219182   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1216 03:48:27.219235   10944 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.094165958s
	I1216 03:48:27.219263   10944 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1216 03:48:28.296470   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1216 03:48:28.296530   10944 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.171677167s
	I1216 03:48:28.296563   10944 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1216 03:48:29.123673   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1216 03:48:29.123723   10944 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.998771417s
	I1216 03:48:29.123787   10944 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1216 03:48:29.190067   10944 cache.go:157] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1216 03:48:29.190109   10944 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 6.06526975s
	I1216 03:48:29.190134   10944 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
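Note that image caching succeeded in parallel even though VM creation failed: the control-plane images are now on disk. The path layout in these lines is mechanical — an image reference maps to <MINIKUBE_HOME>/cache/images/<arch>/<registry>/<repo>_<tag>, with the ":" before the tag replaced by "_". A sketch of that mapping (cachePath is an illustrative helper, not minikube's actual function):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath mirrors the layout visible in the log, e.g.
// "registry.k8s.io/pause:3.10" -> ".../cache/images/arm64/registry.k8s.io/pause_3.10"
func cachePath(minikubeHome, arch, image string) string {
	return filepath.Join(minikubeHome, "cache", "images", arch,
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	fmt.Println(cachePath(
		"/Users/jenkins/minikube-integration/20107-6737/.minikube",
		"arm64", "registry.k8s.io/pause:3.10"))
}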
	I1216 03:48:30.509562   10944 start.go:360] acquireMachinesLock for no-preload-766000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:30.510100   10944 start.go:364] duration metric: took 463.708µs to acquireMachinesLock for "no-preload-766000"
	I1216 03:48:30.510236   10944 start.go:93] Provisioning new machine with config: &{Name:no-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:30.510483   10944 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:30.516977   10944 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:30.567091   10944 start.go:159] libmachine.API.Create for "no-preload-766000" (driver="qemu2")
	I1216 03:48:30.567150   10944 client.go:168] LocalClient.Create starting
	I1216 03:48:30.567338   10944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:30.567440   10944 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:30.567461   10944 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:30.567534   10944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:30.567594   10944 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:30.567612   10944 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:30.568208   10944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:30.751099   10944 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:30.814895   10944 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:30.814901   10944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:30.815127   10944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:30.825241   10944 main.go:141] libmachine: STDOUT: 
	I1216 03:48:30.825273   10944 main.go:141] libmachine: STDERR: 
	I1216 03:48:30.825341   10944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2 +20000M
	I1216 03:48:30.834157   10944 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:30.834176   10944 main.go:141] libmachine: STDERR: 
	I1216 03:48:30.834194   10944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:30.834201   10944 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:30.834209   10944 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:30.834250   10944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:bb:e5:c6:fe:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:30.836354   10944 main.go:141] libmachine: STDOUT: 
	I1216 03:48:30.836375   10944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:30.836390   10944 client.go:171] duration metric: took 269.221542ms to LocalClient.Create
	I1216 03:48:32.836981   10944 start.go:128] duration metric: took 2.32645975s to createHost
	I1216 03:48:32.837081   10944 start.go:83] releasing machines lock for "no-preload-766000", held for 2.327002584s
	W1216 03:48:32.837417   10944 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:32.845588   10944 out.go:201] 
	W1216 03:48:32.849682   10944 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:32.849717   10944 out.go:270] * 
	* 
	W1216 03:48:32.852311   10944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:32.860604   10944 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (71.571833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
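Every failure in this group reduces to the root cause recorded above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, meaning the socket_vmnet daemon was not running on the CI host. A minimal standalone probe for that precondition (socket path taken from the log) — dialing a unix socket with no listener returns exactly the "Connection refused" seen in the STDERR lines:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable: %v\n", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is listening")
}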

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-766000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-766000 create -f testdata/busybox.yaml: exit status 1 (30.076166ms)

** stderr ** 
	error: context "no-preload-766000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-766000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (33.918542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (33.377167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-766000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-766000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-766000 describe deploy/metrics-server -n kube-system: exit status 1 (27.028542ms)

** stderr ** 
	error: context "no-preload-766000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-766000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (34.103875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.189759917s)

-- stdout --
	* [no-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-766000" primary control-plane node in "no-preload-766000" cluster
	* Restarting existing qemu2 VM for "no-preload-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:48:36.469414   11022 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:36.469558   11022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:36.469561   11022 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:36.469563   11022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:36.469689   11022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:36.470764   11022 out.go:352] Setting JSON to false
	I1216 03:48:36.488506   11022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6487,"bootTime":1734343229,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:36.488577   11022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:36.492975   11022 out.go:177] * [no-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:36.500883   11022 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:36.500951   11022 notify.go:220] Checking for updates...
	I1216 03:48:36.508058   11022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:36.510976   11022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:36.513974   11022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:36.517006   11022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:36.518472   11022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:36.522342   11022 config.go:182] Loaded profile config "no-preload-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:36.522615   11022 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:36.525963   11022 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:48:36.531016   11022 start.go:297] selected driver: qemu2
	I1216 03:48:36.531024   11022 start.go:901] validating driver "qemu2" against &{Name:no-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:36.531079   11022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:36.533584   11022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:36.533609   11022 cni.go:84] Creating CNI manager for ""
	I1216 03:48:36.533632   11022 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:48:36.533672   11022 start.go:340] cluster config:
	{Name:no-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:36.538051   11022 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.546952   11022 out.go:177] * Starting "no-preload-766000" primary control-plane node in "no-preload-766000" cluster
	I1216 03:48:36.551002   11022 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:48:36.551076   11022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/no-preload-766000/config.json ...
	I1216 03:48:36.551113   11022 cache.go:107] acquiring lock: {Name:mk43146b652c427a15299fa1b6267909fa626be4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551113   11022 cache.go:107] acquiring lock: {Name:mkcaa5c40db1faf1b4103e1ba316277703814f9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551152   11022 cache.go:107] acquiring lock: {Name:mk1cb263c9c793f30766f3b7ed4d7b6ba0c7608a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551200   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:48:36.551205   11022 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.208µs
	I1216 03:48:36.551212   11022 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:48:36.551207   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1216 03:48:36.551220   11022 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 121.208µs
	I1216 03:48:36.551213   11022 cache.go:107] acquiring lock: {Name:mkec6aeff458f86725ac0491f24a17ab3f8437d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551230   11022 cache.go:107] acquiring lock: {Name:mk357f4807c800ee42844cc7c6c335cbacb7001d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551256   11022 cache.go:107] acquiring lock: {Name:mkdbc706df22038e9f088e54cfaae344153eed3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551275   11022 cache.go:107] acquiring lock: {Name:mk50deb330e5a9b7d6546e45873df2651ccdc66e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551289   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1216 03:48:36.551287   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1216 03:48:36.551310   11022 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 80.542µs
	I1216 03:48:36.551316   11022 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1216 03:48:36.551225   11022 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1216 03:48:36.551292   11022 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1216 03:48:36.551339   11022 cache.go:107] acquiring lock: {Name:mk08c17976bfda97586777f2b753b990cebe6437 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:36.551328   11022 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 215.125µs
	I1216 03:48:36.551370   11022 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1216 03:48:36.551387   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1216 03:48:36.551395   11022 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 159.292µs
	I1216 03:48:36.551405   11022 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1216 03:48:36.551414   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1216 03:48:36.551419   11022 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 99µs
	I1216 03:48:36.551423   11022 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1216 03:48:36.551438   11022 cache.go:115] /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1216 03:48:36.551448   11022 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 186.083µs
	I1216 03:48:36.551452   11022 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1216 03:48:36.551577   11022 start.go:360] acquireMachinesLock for no-preload-766000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:36.551610   11022 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "no-preload-766000"
	I1216 03:48:36.551618   11022 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:48:36.551624   11022 fix.go:54] fixHost starting: 
	I1216 03:48:36.551742   11022 fix.go:112] recreateIfNeeded on no-preload-766000: state=Stopped err=<nil>
	W1216 03:48:36.551748   11022 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:48:36.559965   11022 out.go:177] * Restarting existing qemu2 VM for "no-preload-766000" ...
	I1216 03:48:36.563944   11022 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:36.563981   11022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:bb:e5:c6:fe:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:36.564453   11022 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1216 03:48:36.566449   11022 main.go:141] libmachine: STDOUT: 
	I1216 03:48:36.566477   11022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:36.566503   11022 fix.go:56] duration metric: took 14.878542ms for fixHost
	I1216 03:48:36.566507   11022 start.go:83] releasing machines lock for "no-preload-766000", held for 14.8935ms
	W1216 03:48:36.566514   11022 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:36.566546   11022 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:36.566551   11022 start.go:729] Will try again in 5 seconds ...
	I1216 03:48:36.975940   11022 cache.go:162] opening:  /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1216 03:48:41.566756   11022 start.go:360] acquireMachinesLock for no-preload-766000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:41.567160   11022 start.go:364] duration metric: took 320.083µs to acquireMachinesLock for "no-preload-766000"
	I1216 03:48:41.567297   11022 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:48:41.567319   11022 fix.go:54] fixHost starting: 
	I1216 03:48:41.567981   11022 fix.go:112] recreateIfNeeded on no-preload-766000: state=Stopped err=<nil>
	W1216 03:48:41.568011   11022 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:48:41.572926   11022 out.go:177] * Restarting existing qemu2 VM for "no-preload-766000" ...
	I1216 03:48:41.579542   11022 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:41.579739   11022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:bb:e5:c6:fe:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/no-preload-766000/disk.qcow2
	I1216 03:48:41.590539   11022 main.go:141] libmachine: STDOUT: 
	I1216 03:48:41.590623   11022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:41.590723   11022 fix.go:56] duration metric: took 23.406625ms for fixHost
	I1216 03:48:41.590743   11022 start.go:83] releasing machines lock for "no-preload-766000", held for 23.562583ms
	W1216 03:48:41.590959   11022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:41.597988   11022 out.go:201] 
	W1216 03:48:41.601011   11022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:41.601036   11022 out.go:270] * 
	* 
	W1216 03:48:41.603612   11022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:41.613895   11022 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (70.908708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
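The second start takes a different code path from the first — fixHost restarts the existing, stopped VM instead of createHost provisioning a new one — but it hits the same socket_vmnet refusal, and both paths share the two-attempt shape visible in the logs: one "StartHost failed, but will try again" warning, a 5-second sleep, one retry, then exit with GUEST_PROVISION. A sketch of that control flow (function names here are illustrative, not minikube's):

package main

import (
	"errors"
	"log"
	"time"
)

// startWithRetry mirrors the pattern in the logs: a single retry after a
// fixed 5-second delay; a second failure is surfaced to the caller, which
// minikube reports as GUEST_PROVISION (exit status 80).
func startWithRetry(start func() error) error {
	if err := start(); err != nil {
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		return start()
	}
	return nil
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	log.Printf("final: %v", err)
}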

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-766000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (36.43325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-766000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-766000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-766000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.133958ms)

** stderr ** 
	error: context "no-preload-766000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-766000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (34.838ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-766000 image list --format=json
start_stop_delete_test.go:302: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (34.33875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
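The (-want +got) block above is go-cmp output: each line prefixed "-" is an image the test expected to find in "minikube image list --format=json" but did not — here all eight, since the host never started and the returned list was empty. A minimal reproduction of that output shape (values illustrative):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"registry.k8s.io/pause:3.10"}
	got := []string{} // image list from a host that never started
	// In cmp.Diff(want, got), "-" marks entries present only in want.
	fmt.Print(cmp.Diff(want, got))
}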

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-766000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-766000 --alsologtostderr -v=1: exit status 83 (43.221459ms)

-- stdout --
	* The control-plane node no-preload-766000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-766000"

-- /stdout --
** stderr ** 
	I1216 03:48:41.909744   11047 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:41.909926   11047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:41.909930   11047 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:41.909932   11047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:41.910053   11047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:41.910262   11047 out.go:352] Setting JSON to false
	I1216 03:48:41.910269   11047 mustload.go:65] Loading cluster: no-preload-766000
	I1216 03:48:41.910490   11047 config.go:182] Loaded profile config "no-preload-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:41.912316   11047 out.go:177] * The control-plane node no-preload-766000 host is not running: state=Stopped
	I1216 03:48:41.916168   11047 out.go:177]   To start a cluster, run: "minikube start -p no-preload-766000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p no-preload-766000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (33.677333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (34.138292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
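
Note on the two Pause failures above: minikube refuses to pause a profile whose host is Stopped and exits immediately with a non-zero code (83 here), which the harness records as a failure; nothing was ever attempted against a guest. A minimal guard for scripted use, assuming the same binary and profile as above (an illustrative sketch, not part of the test suite):

	# status exits non-zero (7 above) when the host is not running, so it can gate the pause
	if out/minikube-darwin-arm64 status -p no-preload-766000 >/dev/null 2>&1; then
		out/minikube-darwin-arm64 pause -p no-preload-766000
	fi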

TestStartStop/group/embed-certs/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.966441166s)

-- stdout --
	* [embed-certs-092000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-092000" primary control-plane node in "embed-certs-092000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-092000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I1216 03:48:42.253382   11064 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:42.253551   11064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:42.253554   11064 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:42.253557   11064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:42.253696   11064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:42.254922   11064 out.go:352] Setting JSON to false
	I1216 03:48:42.272907   11064 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6493,"bootTime":1734343229,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:42.272985   11064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:42.278215   11064 out.go:177] * [embed-certs-092000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:42.286183   11064 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:42.286267   11064 notify.go:220] Checking for updates...
	I1216 03:48:42.301114   11064 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:42.304138   11064 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:42.307039   11064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:42.310116   11064 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:42.313137   11064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:42.314723   11064 config.go:182] Loaded profile config "cert-expiration-219000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:42.314794   11064 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:42.314847   11064 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:42.318144   11064 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:48:42.325001   11064 start.go:297] selected driver: qemu2
	I1216 03:48:42.325007   11064 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:48:42.325013   11064 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:42.327627   11064 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:48:42.331053   11064 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:48:42.335258   11064 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:42.335281   11064 cni.go:84] Creating CNI manager for ""
	I1216 03:48:42.335302   11064 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:48:42.335311   11064 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:48:42.335352   11064 start.go:340] cluster config:
	{Name:embed-certs-092000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:42.340191   11064 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:42.348180   11064 out.go:177] * Starting "embed-certs-092000" primary control-plane node in "embed-certs-092000" cluster
	I1216 03:48:42.351961   11064 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:48:42.351977   11064 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:48:42.351989   11064 cache.go:56] Caching tarball of preloaded images
	I1216 03:48:42.352068   11064 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:48:42.352074   11064 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:48:42.352144   11064 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/embed-certs-092000/config.json ...
	I1216 03:48:42.352155   11064 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/embed-certs-092000/config.json: {Name:mk34d589163e868627d6e32d30b8305dbac8250d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:48:42.352621   11064 start.go:360] acquireMachinesLock for embed-certs-092000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:42.352672   11064 start.go:364] duration metric: took 45.333µs to acquireMachinesLock for "embed-certs-092000"
	I1216 03:48:42.352685   11064 start.go:93] Provisioning new machine with config: &{Name:embed-certs-092000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:42.352718   11064 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:42.360925   11064 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:42.379647   11064 start.go:159] libmachine.API.Create for "embed-certs-092000" (driver="qemu2")
	I1216 03:48:42.379676   11064 client.go:168] LocalClient.Create starting
	I1216 03:48:42.379763   11064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:42.379809   11064 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:42.379820   11064 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:42.379859   11064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:42.379892   11064 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:42.379904   11064 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:42.380421   11064 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:42.554304   11064 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:42.691077   11064 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:42.691084   11064 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:42.691325   11064 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:42.701790   11064 main.go:141] libmachine: STDOUT: 
	I1216 03:48:42.701811   11064 main.go:141] libmachine: STDERR: 
	I1216 03:48:42.701876   11064 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2 +20000M
	I1216 03:48:42.710442   11064 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:42.710456   11064 main.go:141] libmachine: STDERR: 
	I1216 03:48:42.710475   11064 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:42.710479   11064 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:42.710491   11064 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:42.710518   11064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:41:0d:6d:7c:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:42.712356   11064 main.go:141] libmachine: STDOUT: 
	I1216 03:48:42.712369   11064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:42.712389   11064 client.go:171] duration metric: took 332.713292ms to LocalClient.Create
	I1216 03:48:44.714523   11064 start.go:128] duration metric: took 2.361826s to createHost
	I1216 03:48:44.714590   11064 start.go:83] releasing machines lock for "embed-certs-092000", held for 2.361951209s
	W1216 03:48:44.714684   11064 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:44.730938   11064 out.go:177] * Deleting "embed-certs-092000" in qemu2 ...
	W1216 03:48:44.761368   11064 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:44.761391   11064 start.go:729] Will try again in 5 seconds ...
	I1216 03:48:49.763608   11064 start.go:360] acquireMachinesLock for embed-certs-092000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:49.764208   11064 start.go:364] duration metric: took 474.083µs to acquireMachinesLock for "embed-certs-092000"
	I1216 03:48:49.764333   11064 start.go:93] Provisioning new machine with config: &{Name:embed-certs-092000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:49.764598   11064 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:49.785513   11064 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:49.833314   11064 start.go:159] libmachine.API.Create for "embed-certs-092000" (driver="qemu2")
	I1216 03:48:49.833364   11064 client.go:168] LocalClient.Create starting
	I1216 03:48:49.833493   11064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:49.833587   11064 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:49.833603   11064 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:49.833669   11064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:49.833725   11064 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:49.833740   11064 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:49.834595   11064 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:50.018872   11064 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:50.114806   11064 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:50.114812   11064 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:50.115036   11064 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:50.125288   11064 main.go:141] libmachine: STDOUT: 
	I1216 03:48:50.125311   11064 main.go:141] libmachine: STDERR: 
	I1216 03:48:50.125377   11064 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2 +20000M
	I1216 03:48:50.133954   11064 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:50.133969   11064 main.go:141] libmachine: STDERR: 
	I1216 03:48:50.133980   11064 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:50.133984   11064 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:50.134001   11064 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:50.134031   11064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:9d:bf:99:89:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:50.135827   11064 main.go:141] libmachine: STDOUT: 
	I1216 03:48:50.135840   11064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:50.135852   11064 client.go:171] duration metric: took 302.488542ms to LocalClient.Create
	I1216 03:48:52.138096   11064 start.go:128] duration metric: took 2.373507292s to createHost
	I1216 03:48:52.138167   11064 start.go:83] releasing machines lock for "embed-certs-092000", held for 2.373978792s
	W1216 03:48:52.138593   11064 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:52.152068   11064 out.go:201] 
	W1216 03:48:52.158632   11064 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:52.158669   11064 out.go:270] * 
	* 
	W1216 03:48:52.161296   11064 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:48:52.170092   11064 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (67.029917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.04s)
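
Every qemu2 start in this run dies at the same step: libmachine shells out to socket_vmnet_client, which cannot reach the daemon's unix socket, so qemu-system-aarch64 is never launched. "Connection refused" on a unix socket means nothing is listening at that path, i.e. the socket_vmnet daemon is not running on the build agent. A quick triage sketch using the paths from the log above (the restart command assumes a Homebrew-managed service; a source install would instead relaunch its launchd daemon as root):

	# is anything listening at the socket path?
	ls -l /var/run/socket_vmnet
	# reproduce the failing connect without QEMU; the client connects, then execs the given command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Homebrew installs can (re)start the daemon as a root service
	sudo brew services restart socket_vmnet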

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-092000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context embed-certs-092000 create -f testdata/busybox.yaml: exit status 1 (30.260959ms)

** stderr ** 
	error: context "embed-certs-092000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context embed-certs-092000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.472166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.192042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
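
DeployApp and the remaining embed-certs sub-tests are cascade failures rather than independent ones: because FirstStart never brought a VM up, no "embed-certs-092000" context was ever written to the kubeconfig, so every kubectl --context invocation fails before reaching a cluster. This is easy to confirm by hand (an illustrative check against the same kubeconfig as the run above):

	# errors with "no context exists" until a cluster actually starts
	kubectl config get-contexts embed-certs-092000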

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-092000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-092000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-092000 describe deploy/metrics-server -n kube-system: exit status 1 (27.738833ms)

** stderr ** 
	error: context "embed-certs-092000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-092000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.981875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
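
Worth noting: the "addons enable" step itself passed, since it only rewrites the profile's config on disk and does not need a running host; the failure is confined to the kubectl verification. On a healthy cluster the image override could be checked directly with something like the following (an alternative to the test's "describe deploy", shown for illustration only):

	# expected to report fake.domain/registry.k8s.io/echoserver:1.4 given the --images/--registries flags above
	kubectl --context embed-certs-092000 -n kube-system get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[0].image}'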

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.196063625s)

-- stdout --
	* [embed-certs-092000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-092000" primary control-plane node in "embed-certs-092000" cluster
	* Restarting existing qemu2 VM for "embed-certs-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I1216 03:48:56.291772   11122 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:56.291917   11122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:56.291922   11122 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:56.291925   11122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:56.292050   11122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:56.293127   11122 out.go:352] Setting JSON to false
	I1216 03:48:56.310854   11122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6507,"bootTime":1734343229,"procs":569,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:56.310925   11122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:56.314566   11122 out.go:177] * [embed-certs-092000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:56.320427   11122 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:56.320467   11122 notify.go:220] Checking for updates...
	I1216 03:48:56.326385   11122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:56.329469   11122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:56.332447   11122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:56.333836   11122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:56.336405   11122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:56.339792   11122 config.go:182] Loaded profile config "embed-certs-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:56.340078   11122 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:56.341832   11122 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:48:56.349459   11122 start.go:297] selected driver: qemu2
	I1216 03:48:56.349466   11122 start.go:901] validating driver "qemu2" against &{Name:embed-certs-092000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:56.349527   11122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:56.351964   11122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:56.351989   11122 cni.go:84] Creating CNI manager for ""
	I1216 03:48:56.352010   11122 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:48:56.352033   11122 start.go:340] cluster config:
	{Name:embed-certs-092000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:56.356323   11122 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:56.364376   11122 out.go:177] * Starting "embed-certs-092000" primary control-plane node in "embed-certs-092000" cluster
	I1216 03:48:56.368523   11122 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:48:56.368543   11122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:48:56.368556   11122 cache.go:56] Caching tarball of preloaded images
	I1216 03:48:56.368630   11122 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:48:56.368636   11122 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:48:56.368698   11122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/embed-certs-092000/config.json ...
	I1216 03:48:56.369188   11122 start.go:360] acquireMachinesLock for embed-certs-092000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:56.369220   11122 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "embed-certs-092000"
	I1216 03:48:56.369228   11122 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:48:56.369234   11122 fix.go:54] fixHost starting: 
	I1216 03:48:56.369361   11122 fix.go:112] recreateIfNeeded on embed-certs-092000: state=Stopped err=<nil>
	W1216 03:48:56.369368   11122 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:48:56.377413   11122 out.go:177] * Restarting existing qemu2 VM for "embed-certs-092000" ...
	I1216 03:48:56.381370   11122 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:56.381413   11122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:9d:bf:99:89:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:48:56.383681   11122 main.go:141] libmachine: STDOUT: 
	I1216 03:48:56.383703   11122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:56.383737   11122 fix.go:56] duration metric: took 14.502375ms for fixHost
	I1216 03:48:56.383742   11122 start.go:83] releasing machines lock for "embed-certs-092000", held for 14.517375ms
	W1216 03:48:56.383747   11122 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:48:56.383786   11122 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:48:56.383791   11122 start.go:729] Will try again in 5 seconds ...
	I1216 03:49:01.385945   11122 start.go:360] acquireMachinesLock for embed-certs-092000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:01.386512   11122 start.go:364] duration metric: took 432.083µs to acquireMachinesLock for "embed-certs-092000"
	I1216 03:49:01.386629   11122 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:49:01.386652   11122 fix.go:54] fixHost starting: 
	I1216 03:49:01.387510   11122 fix.go:112] recreateIfNeeded on embed-certs-092000: state=Stopped err=<nil>
	W1216 03:49:01.387536   11122 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:49:01.408165   11122 out.go:177] * Restarting existing qemu2 VM for "embed-certs-092000" ...
	I1216 03:49:01.412893   11122 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:01.413163   11122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:9d:bf:99:89:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/embed-certs-092000/disk.qcow2
	I1216 03:49:01.424490   11122 main.go:141] libmachine: STDOUT: 
	I1216 03:49:01.424561   11122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:01.424692   11122 fix.go:56] duration metric: took 38.044ms for fixHost
	I1216 03:49:01.424714   11122 start.go:83] releasing machines lock for "embed-certs-092000", held for 38.180375ms
	W1216 03:49:01.424894   11122 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:01.431739   11122 out.go:201] 
	W1216 03:49:01.435054   11122 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:01.435111   11122 out.go:270] * 
	* 
	W1216 03:49:01.437351   11122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:49:01.445037   11122 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-092000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (75.3545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
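
SecondStart takes the fixHost path ("Skipping create...Using existing machine configuration") because FirstStart left a machine directory behind, and then hits the identical socket_vmnet connect failure while restarting that stale VM. The cleanup the error text itself suggests clears this state so the next start provisions from scratch:

	# removes the half-created machine and profile for this test
	out/minikube-darwin-arm64 delete -p embed-certs-092000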

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.953934s)

-- stdout --
	* [default-k8s-diff-port-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-352000" primary control-plane node in "default-k8s-diff-port-352000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-352000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I1216 03:48:57.539922   11142 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:48:57.540083   11142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:57.540086   11142 out.go:358] Setting ErrFile to fd 2...
	I1216 03:48:57.540088   11142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:48:57.540228   11142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:48:57.541415   11142 out.go:352] Setting JSON to false
	I1216 03:48:57.559129   11142 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6508,"bootTime":1734343229,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:48:57.559195   11142 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:48:57.563178   11142 out.go:177] * [default-k8s-diff-port-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:48:57.571074   11142 notify.go:220] Checking for updates...
	I1216 03:48:57.575157   11142 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:48:57.582081   11142 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:48:57.588125   11142 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:48:57.592102   11142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:48:57.595110   11142 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:48:57.603121   11142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:48:57.607501   11142 config.go:182] Loaded profile config "embed-certs-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:57.607566   11142 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:48:57.607613   11142 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:48:57.609213   11142 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:48:57.616184   11142 start.go:297] selected driver: qemu2
	I1216 03:48:57.616190   11142 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:48:57.616195   11142 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:48:57.618810   11142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:48:57.622075   11142 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:48:57.626195   11142 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:48:57.626212   11142 cni.go:84] Creating CNI manager for ""
	I1216 03:48:57.626237   11142 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:48:57.626242   11142 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:48:57.626279   11142 start.go:340] cluster config:
	{Name:default-k8s-diff-port-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:48:57.631376   11142 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:48:57.639123   11142 out.go:177] * Starting "default-k8s-diff-port-352000" primary control-plane node in "default-k8s-diff-port-352000" cluster
	I1216 03:48:57.643113   11142 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:48:57.643133   11142 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:48:57.643145   11142 cache.go:56] Caching tarball of preloaded images
	I1216 03:48:57.643230   11142 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:48:57.643236   11142 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:48:57.643297   11142 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/default-k8s-diff-port-352000/config.json ...
	I1216 03:48:57.643308   11142 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/default-k8s-diff-port-352000/config.json: {Name:mk375d3050d2d60797cc3cfd5eb452ce0b546d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:48:57.643611   11142 start.go:360] acquireMachinesLock for default-k8s-diff-port-352000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:48:57.643665   11142 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "default-k8s-diff-port-352000"
	I1216 03:48:57.643677   11142 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:48:57.643723   11142 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:48:57.648111   11142 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:48:57.666708   11142 start.go:159] libmachine.API.Create for "default-k8s-diff-port-352000" (driver="qemu2")
	I1216 03:48:57.666732   11142 client.go:168] LocalClient.Create starting
	I1216 03:48:57.666795   11142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:48:57.666838   11142 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:57.666855   11142 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:57.666895   11142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:48:57.666926   11142 main.go:141] libmachine: Decoding PEM data...
	I1216 03:48:57.666934   11142 main.go:141] libmachine: Parsing certificate...
	I1216 03:48:57.667318   11142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:48:57.828327   11142 main.go:141] libmachine: Creating SSH key...
	I1216 03:48:58.023766   11142 main.go:141] libmachine: Creating Disk image...
	I1216 03:48:58.023773   11142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:48:58.024027   11142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:48:58.034505   11142 main.go:141] libmachine: STDOUT: 
	I1216 03:48:58.034520   11142 main.go:141] libmachine: STDERR: 
	I1216 03:48:58.034595   11142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2 +20000M
	I1216 03:48:58.043368   11142 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:48:58.043381   11142 main.go:141] libmachine: STDERR: 
	I1216 03:48:58.043394   11142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:48:58.043410   11142 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:48:58.043421   11142 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:48:58.043454   11142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:48:19:a2:87:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:48:58.045299   11142 main.go:141] libmachine: STDOUT: 
	I1216 03:48:58.045311   11142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:48:58.045328   11142 client.go:171] duration metric: took 378.598584ms to LocalClient.Create
	I1216 03:49:00.047485   11142 start.go:128] duration metric: took 2.403779459s to createHost
	I1216 03:49:00.047560   11142 start.go:83] releasing machines lock for "default-k8s-diff-port-352000", held for 2.403930209s
	W1216 03:49:00.047611   11142 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:00.062172   11142 out.go:177] * Deleting "default-k8s-diff-port-352000" in qemu2 ...
	W1216 03:49:00.091241   11142 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:00.091269   11142 start.go:729] Will try again in 5 seconds ...
	I1216 03:49:05.093407   11142 start.go:360] acquireMachinesLock for default-k8s-diff-port-352000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:05.093999   11142 start.go:364] duration metric: took 472.833µs to acquireMachinesLock for "default-k8s-diff-port-352000"
	I1216 03:49:05.094135   11142 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:49:05.094416   11142 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:49:05.103933   11142 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:49:05.152168   11142 start.go:159] libmachine.API.Create for "default-k8s-diff-port-352000" (driver="qemu2")
	I1216 03:49:05.152224   11142 client.go:168] LocalClient.Create starting
	I1216 03:49:05.152356   11142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:49:05.152433   11142 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:05.152456   11142 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:05.152535   11142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:49:05.152604   11142 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:05.152616   11142 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:05.157459   11142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:49:05.334448   11142 main.go:141] libmachine: Creating SSH key...
	I1216 03:49:05.397016   11142 main.go:141] libmachine: Creating Disk image...
	I1216 03:49:05.397022   11142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:49:05.397246   11142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:49:05.407122   11142 main.go:141] libmachine: STDOUT: 
	I1216 03:49:05.407141   11142 main.go:141] libmachine: STDERR: 
	I1216 03:49:05.407199   11142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2 +20000M
	I1216 03:49:05.415698   11142 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:49:05.415712   11142 main.go:141] libmachine: STDERR: 
	I1216 03:49:05.415722   11142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:49:05.415727   11142 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:49:05.415744   11142 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:05.415771   11142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:dd:d0:18:14:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:49:05.417679   11142 main.go:141] libmachine: STDOUT: 
	I1216 03:49:05.417693   11142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:05.417706   11142 client.go:171] duration metric: took 265.482709ms to LocalClient.Create
	I1216 03:49:07.419909   11142 start.go:128] duration metric: took 2.325412166s to createHost
	I1216 03:49:07.419977   11142 start.go:83] releasing machines lock for "default-k8s-diff-port-352000", held for 2.325997167s
	W1216 03:49:07.420345   11142 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-352000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-352000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:07.428922   11142 out.go:201] 
	W1216 03:49:07.438940   11142 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:07.438964   11142 out.go:270] * 
	* 
	W1216 03:49:07.441814   11142 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:49:07.448900   11142 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (69.901125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)
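Note on root cause: every start failure in this log is the same host-side fault. socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network file descriptor and createHost aborts. A minimal diagnostic sketch follows; the socket and client paths are the ones logged above, while the Homebrew service name is an assumption about how the daemon was installed:

	# Is the socket_vmnet daemon alive and holding its UNIX socket?
	ls -l /var/run/socket_vmnet          # should exist as a socket file
	sudo lsof -U | grep socket_vmnet     # daemon should appear as the listener

	# If it is down, restart it (assumes a Homebrew service install):
	sudo brew services restart socket_vmnet

Once the daemon is listening again, re-running the start command from this test should get past the "Connection refused" error.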

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-092000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (36.297292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
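The "context ... does not exist" failures in this and the following subtests are downstream of the earlier start failure: the profile's VM never booted, so no context was ever written to kubeconfig. A quick hedged check with stock kubectl (profile name taken from this report):

	# Did the embed-certs-092000 context ever reach kubeconfig?
	kubectl config get-contexts -o name | grep -x embed-certs-092000 \
	  || echo "context embed-certs-092000 was never created"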

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-092000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.552083ms)

** stderr ** 
	error: context "embed-certs-092000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.757375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-092000 image list --format=json
start_stop_delete_test.go:302: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.166625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
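The diff above shows only wanted images because the "got" side is empty: image list ran against a host that never booted. Against a healthy profile the same check can be approximated by hand; the jq filter is an assumption about the JSON shape of image list output, not something this report confirms:

	out/minikube-darwin-arm64 -p embed-certs-092000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort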

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-092000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-092000 --alsologtostderr -v=1: exit status 83 (46.666583ms)

-- stdout --
	* The control-plane node embed-certs-092000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-092000"

-- /stdout --
** stderr ** 
	I1216 03:49:01.740780   11167 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:01.740970   11167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:01.740974   11167 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:01.740976   11167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:01.741120   11167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:01.741340   11167 out.go:352] Setting JSON to false
	I1216 03:49:01.741348   11167 mustload.go:65] Loading cluster: embed-certs-092000
	I1216 03:49:01.741547   11167 config.go:182] Loaded profile config "embed-certs-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:01.745686   11167 out.go:177] * The control-plane node embed-certs-092000 host is not running: state=Stopped
	I1216 03:49:01.749597   11167 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-092000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p embed-certs-092000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.283584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (33.419708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
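Exit status 83 here is advisory rather than fatal: pause refused to act because the host is Stopped. A guard equivalent to the status probe the helper already performs (both commands appear verbatim in this report; only their composition into one script is assumed):

	# Only attempt the pause when the control-plane host is actually running.
	if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-092000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p embed-certs-092000 --alsologtostderr -v=1
	else
	  echo "embed-certs-092000 is not running; nothing to pause"
	fi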

TestStartStop/group/newest-cni/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.068654916s)

-- stdout --
	* [newest-cni-540000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-540000" primary control-plane node in "newest-cni-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:49:02.085129   11184 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:02.085298   11184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:02.085302   11184 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:02.085304   11184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:02.085426   11184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:02.086642   11184 out.go:352] Setting JSON to false
	I1216 03:49:02.104265   11184 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6513,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:49:02.104333   11184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:49:02.109678   11184 out.go:177] * [newest-cni-540000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:49:02.116606   11184 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:49:02.116669   11184 notify.go:220] Checking for updates...
	I1216 03:49:02.124531   11184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:49:02.127667   11184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:49:02.130631   11184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:49:02.133588   11184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:49:02.136618   11184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:49:02.139904   11184 config.go:182] Loaded profile config "default-k8s-diff-port-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:02.139971   11184 config.go:182] Loaded profile config "multinode-791000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:02.140024   11184 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:49:02.143570   11184 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 03:49:02.150635   11184 start.go:297] selected driver: qemu2
	I1216 03:49:02.150641   11184 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:49:02.150649   11184 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:49:02.153128   11184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1216 03:49:02.153178   11184 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 03:49:02.160531   11184 out.go:177] * Automatically selected the socket_vmnet network
	I1216 03:49:02.163707   11184 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:49:02.163726   11184 cni.go:84] Creating CNI manager for ""
	I1216 03:49:02.163757   11184 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:49:02.163762   11184 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:49:02.163793   11184 start.go:340] cluster config:
	{Name:newest-cni-540000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:49:02.168516   11184 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:49:02.176561   11184 out.go:177] * Starting "newest-cni-540000" primary control-plane node in "newest-cni-540000" cluster
	I1216 03:49:02.180553   11184 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:49:02.180572   11184 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:49:02.180583   11184 cache.go:56] Caching tarball of preloaded images
	I1216 03:49:02.180683   11184 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:49:02.180689   11184 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:49:02.180755   11184 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/newest-cni-540000/config.json ...
	I1216 03:49:02.180767   11184 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/newest-cni-540000/config.json: {Name:mka20d4d2fa1ddfc53dbb6f654c73611aad9e5dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:49:02.181229   11184 start.go:360] acquireMachinesLock for newest-cni-540000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:02.181281   11184 start.go:364] duration metric: took 45.833µs to acquireMachinesLock for "newest-cni-540000"
	I1216 03:49:02.181293   11184 start.go:93] Provisioning new machine with config: &{Name:newest-cni-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:49:02.181322   11184 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:49:02.185652   11184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:49:02.203522   11184 start.go:159] libmachine.API.Create for "newest-cni-540000" (driver="qemu2")
	I1216 03:49:02.203557   11184 client.go:168] LocalClient.Create starting
	I1216 03:49:02.203635   11184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:49:02.203673   11184 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:02.203685   11184 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:02.203726   11184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:49:02.203757   11184 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:02.203767   11184 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:02.204220   11184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:49:02.367412   11184 main.go:141] libmachine: Creating SSH key...
	I1216 03:49:02.626480   11184 main.go:141] libmachine: Creating Disk image...
	I1216 03:49:02.626495   11184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:49:02.626760   11184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:02.637657   11184 main.go:141] libmachine: STDOUT: 
	I1216 03:49:02.637685   11184 main.go:141] libmachine: STDERR: 
	I1216 03:49:02.637748   11184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2 +20000M
	I1216 03:49:02.646767   11184 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:49:02.646782   11184 main.go:141] libmachine: STDERR: 
	I1216 03:49:02.646803   11184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:02.646808   11184 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:49:02.646821   11184 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:02.646848   11184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:12:15:f1:89:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:02.648636   11184 main.go:141] libmachine: STDOUT: 
	I1216 03:49:02.648648   11184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:02.648669   11184 client.go:171] duration metric: took 445.114417ms to LocalClient.Create
	I1216 03:49:04.650823   11184 start.go:128] duration metric: took 2.469518375s to createHost
	I1216 03:49:04.650878   11184 start.go:83] releasing machines lock for "newest-cni-540000", held for 2.469632708s
	W1216 03:49:04.650936   11184 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:04.662331   11184 out.go:177] * Deleting "newest-cni-540000" in qemu2 ...
	W1216 03:49:04.693556   11184 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:04.693580   11184 start.go:729] Will try again in 5 seconds ...
	I1216 03:49:09.695633   11184 start.go:360] acquireMachinesLock for newest-cni-540000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:09.695887   11184 start.go:364] duration metric: took 192.583µs to acquireMachinesLock for "newest-cni-540000"
	I1216 03:49:09.695926   11184 start.go:93] Provisioning new machine with config: &{Name:newest-cni-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 03:49:09.696126   11184 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 03:49:09.704478   11184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 03:49:09.753092   11184 start.go:159] libmachine.API.Create for "newest-cni-540000" (driver="qemu2")
	I1216 03:49:09.753151   11184 client.go:168] LocalClient.Create starting
	I1216 03:49:09.753257   11184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/ca.pem
	I1216 03:49:09.753321   11184 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:09.753338   11184 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:09.753403   11184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20107-6737/.minikube/certs/cert.pem
	I1216 03:49:09.753435   11184 main.go:141] libmachine: Decoding PEM data...
	I1216 03:49:09.753450   11184 main.go:141] libmachine: Parsing certificate...
	I1216 03:49:09.754078   11184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1216 03:49:09.956635   11184 main.go:141] libmachine: Creating SSH key...
	I1216 03:49:10.054377   11184 main.go:141] libmachine: Creating Disk image...
	I1216 03:49:10.054389   11184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 03:49:10.054605   11184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:10.064247   11184 main.go:141] libmachine: STDOUT: 
	I1216 03:49:10.064273   11184 main.go:141] libmachine: STDERR: 
	I1216 03:49:10.064336   11184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2 +20000M
	I1216 03:49:10.072787   11184 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 03:49:10.072804   11184 main.go:141] libmachine: STDERR: 
	I1216 03:49:10.072820   11184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:10.072835   11184 main.go:141] libmachine: Starting QEMU VM...
	I1216 03:49:10.072846   11184 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:10.072885   11184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a4:22:73:5a:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:10.074725   11184 main.go:141] libmachine: STDOUT: 
	I1216 03:49:10.074741   11184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:10.074758   11184 client.go:171] duration metric: took 321.606334ms to LocalClient.Create
	I1216 03:49:12.075718   11184 start.go:128] duration metric: took 2.379608209s to createHost
	I1216 03:49:12.075785   11184 start.go:83] releasing machines lock for "newest-cni-540000", held for 2.379926083s
	W1216 03:49:12.076135   11184 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:12.084798   11184 out.go:201] 
	W1216 03:49:12.094847   11184 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:12.094895   11184 out.go:270] * 
	* 
	W1216 03:49:12.097491   11184 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:49:12.109790   11184 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (66.978583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.14s)
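
Every failure in this section traces to the same condition visible in the stderr above: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not running on the build agent. A minimal triage sketch on the affected host, assuming socket_vmnet was installed under /opt/socket_vmnet as in the logged command line (the launchd check is a guess; the service label varies per install):

	ls -l /var/run/socket_vmnet          # the unix socket should exist
	pgrep -fl socket_vmnet               # the daemon should be running
	sudo launchctl list | grep -i vmnet  # if socket_vmnet was installed as a launchd service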

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-352000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-352000 create -f testdata/busybox.yaml: exit status 1 (29.790875ms)

** stderr ** 
	error: context "default-k8s-diff-port-352000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context default-k8s-diff-port-352000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (33.7705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (33.371667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
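
This DeployApp failure is a downstream symptom rather than an independent bug: FirstStart never created the cluster, so the kubeconfig context the test targets was never written. A quick confirmation with plain kubectl (no minikube-specific assumptions) would be:

	kubectl config get-contexts
	kubectl config get-contexts default-k8s-diff-port-352000   # exits non-zero if the context is absent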

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-352000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-352000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-352000 describe deploy/metrics-server -n kube-system: exit status 1 (27.474458ms)

** stderr ** 
	error: context "default-k8s-diff-port-352000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-352000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (33.144042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (6.020675667s)

-- stdout --
	* [default-k8s-diff-port-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-352000" primary control-plane node in "default-k8s-diff-port-352000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-352000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:49:11.184535   11236 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:11.184689   11236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:11.184693   11236 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:11.184695   11236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:11.184819   11236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:11.185907   11236 out.go:352] Setting JSON to false
	I1216 03:49:11.203741   11236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6522,"bootTime":1734343229,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:49:11.203815   11236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:49:11.207634   11236 out.go:177] * [default-k8s-diff-port-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:49:11.214588   11236 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:49:11.214647   11236 notify.go:220] Checking for updates...
	I1216 03:49:11.222407   11236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:49:11.226591   11236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:49:11.229554   11236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:49:11.231019   11236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:49:11.234598   11236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:49:11.237936   11236 config.go:182] Loaded profile config "default-k8s-diff-port-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:11.238203   11236 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:49:11.239953   11236 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:49:11.246586   11236 start.go:297] selected driver: qemu2
	I1216 03:49:11.246594   11236 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:49:11.246650   11236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:49:11.249201   11236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:49:11.249230   11236 cni.go:84] Creating CNI manager for ""
	I1216 03:49:11.249251   11236 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:49:11.249274   11236 start.go:340] cluster config:
	{Name:default-k8s-diff-port-352000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:49:11.253681   11236 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:49:11.261531   11236 out.go:177] * Starting "default-k8s-diff-port-352000" primary control-plane node in "default-k8s-diff-port-352000" cluster
	I1216 03:49:11.263362   11236 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:49:11.263378   11236 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:49:11.263389   11236 cache.go:56] Caching tarball of preloaded images
	I1216 03:49:11.263486   11236 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:49:11.263491   11236 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:49:11.263545   11236 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/default-k8s-diff-port-352000/config.json ...
	I1216 03:49:11.263973   11236 start.go:360] acquireMachinesLock for default-k8s-diff-port-352000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:12.075931   11236 start.go:364] duration metric: took 811.953167ms to acquireMachinesLock for "default-k8s-diff-port-352000"
	I1216 03:49:12.076124   11236 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:49:12.076167   11236 fix.go:54] fixHost starting: 
	I1216 03:49:12.076841   11236 fix.go:112] recreateIfNeeded on default-k8s-diff-port-352000: state=Stopped err=<nil>
	W1216 03:49:12.076886   11236 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:49:12.091845   11236 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-352000" ...
	I1216 03:49:12.098807   11236 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:12.098991   11236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:dd:d0:18:14:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:49:12.110001   11236 main.go:141] libmachine: STDOUT: 
	I1216 03:49:12.110060   11236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:12.110213   11236 fix.go:56] duration metric: took 34.049625ms for fixHost
	I1216 03:49:12.110237   11236 start.go:83] releasing machines lock for "default-k8s-diff-port-352000", held for 34.24075ms
	W1216 03:49:12.110261   11236 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:12.110450   11236 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:12.110466   11236 start.go:729] Will try again in 5 seconds ...
	I1216 03:49:17.112601   11236 start.go:360] acquireMachinesLock for default-k8s-diff-port-352000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:17.113020   11236 start.go:364] duration metric: took 319.792µs to acquireMachinesLock for "default-k8s-diff-port-352000"
	I1216 03:49:17.113145   11236 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:49:17.113163   11236 fix.go:54] fixHost starting: 
	I1216 03:49:17.113924   11236 fix.go:112] recreateIfNeeded on default-k8s-diff-port-352000: state=Stopped err=<nil>
	W1216 03:49:17.113954   11236 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:49:17.123277   11236 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-352000" ...
	I1216 03:49:17.127346   11236 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:17.127572   11236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:dd:d0:18:14:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/default-k8s-diff-port-352000/disk.qcow2
	I1216 03:49:17.137367   11236 main.go:141] libmachine: STDOUT: 
	I1216 03:49:17.137418   11236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:17.137490   11236 fix.go:56] duration metric: took 24.326291ms for fixHost
	I1216 03:49:17.137511   11236 start.go:83] releasing machines lock for "default-k8s-diff-port-352000", held for 24.470291ms
	W1216 03:49:17.137649   11236 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-352000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:17.146284   11236 out.go:201] 
	W1216 03:49:17.149458   11236 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:17.149501   11236 out.go:270] * 
	* 
	W1216 03:49:17.152007   11236 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:49:17.160272   11236 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (71.674667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.09s)
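
Per the hint in the output above, once the socket_vmnet daemon is healthy again the stale profile can be recreated. A recovery sketch, reusing the exact flags from the failed run:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-352000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-352000 --memory=2200 --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.2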

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.194580708s)

-- stdout --
	* [newest-cni-540000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-540000" primary control-plane node in "newest-cni-540000" cluster
	* Restarting existing qemu2 VM for "newest-cni-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-540000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 03:49:14.552402   11263 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:14.552563   11263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:14.552566   11263 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:14.552569   11263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:14.552700   11263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:14.553815   11263 out.go:352] Setting JSON to false
	I1216 03:49:14.571467   11263 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6525,"bootTime":1734343229,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:49:14.571543   11263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:49:14.576122   11263 out.go:177] * [newest-cni-540000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:49:14.584055   11263 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:49:14.584123   11263 notify.go:220] Checking for updates...
	I1216 03:49:14.591068   11263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:49:14.594051   11263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:49:14.597070   11263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:49:14.600009   11263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:49:14.603063   11263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:49:14.606348   11263 config.go:182] Loaded profile config "newest-cni-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:14.606619   11263 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:49:14.610016   11263 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:49:14.615957   11263 start.go:297] selected driver: qemu2
	I1216 03:49:14.615971   11263 start.go:901] validating driver "qemu2" against &{Name:newest-cni-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:49:14.616037   11263 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:49:14.618523   11263 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 03:49:14.618548   11263 cni.go:84] Creating CNI manager for ""
	I1216 03:49:14.618567   11263 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:49:14.618591   11263 start.go:340] cluster config:
	{Name:newest-cni-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:49:14.622995   11263 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:49:14.631014   11263 out.go:177] * Starting "newest-cni-540000" primary control-plane node in "newest-cni-540000" cluster
	I1216 03:49:14.634034   11263 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:49:14.634049   11263 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:49:14.634064   11263 cache.go:56] Caching tarball of preloaded images
	I1216 03:49:14.634115   11263 preload.go:172] Found /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:49:14.634121   11263 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1216 03:49:14.634185   11263 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/newest-cni-540000/config.json ...
	I1216 03:49:14.634724   11263 start.go:360] acquireMachinesLock for newest-cni-540000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:14.634753   11263 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "newest-cni-540000"
	I1216 03:49:14.634766   11263 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:49:14.634772   11263 fix.go:54] fixHost starting: 
	I1216 03:49:14.634887   11263 fix.go:112] recreateIfNeeded on newest-cni-540000: state=Stopped err=<nil>
	W1216 03:49:14.634895   11263 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:49:14.637958   11263 out.go:177] * Restarting existing qemu2 VM for "newest-cni-540000" ...
	I1216 03:49:14.650089   11263 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:14.650133   11263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a4:22:73:5a:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:14.652309   11263 main.go:141] libmachine: STDOUT: 
	I1216 03:49:14.652327   11263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:14.652356   11263 fix.go:56] duration metric: took 17.584042ms for fixHost
	I1216 03:49:14.652362   11263 start.go:83] releasing machines lock for "newest-cni-540000", held for 17.604917ms
	W1216 03:49:14.652367   11263 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:14.652407   11263 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:14.652411   11263 start.go:729] Will try again in 5 seconds ...
	I1216 03:49:19.654591   11263 start.go:360] acquireMachinesLock for newest-cni-540000: {Name:mkc8fddba826003227789917a94b04c8e8640b9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:49:19.655207   11263 start.go:364] duration metric: took 505.583µs to acquireMachinesLock for "newest-cni-540000"
	I1216 03:49:19.655349   11263 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:49:19.655370   11263 fix.go:54] fixHost starting: 
	I1216 03:49:19.656184   11263 fix.go:112] recreateIfNeeded on newest-cni-540000: state=Stopped err=<nil>
	W1216 03:49:19.656212   11263 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:49:19.664550   11263 out.go:177] * Restarting existing qemu2 VM for "newest-cni-540000" ...
	I1216 03:49:19.668653   11263 qemu.go:418] Using hvf for hardware acceleration
	I1216 03:49:19.668885   11263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a4:22:73:5a:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20107-6737/.minikube/machines/newest-cni-540000/disk.qcow2
	I1216 03:49:19.679573   11263 main.go:141] libmachine: STDOUT: 
	I1216 03:49:19.679636   11263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 03:49:19.679733   11263 fix.go:56] duration metric: took 24.3665ms for fixHost
	I1216 03:49:19.679753   11263 start.go:83] releasing machines lock for "newest-cni-540000", held for 24.523709ms
	W1216 03:49:19.679965   11263 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-540000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 03:49:19.688577   11263 out.go:201] 
	W1216 03:49:19.691633   11263 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 03:49:19.691659   11263 out.go:270] * 
	* 
	W1216 03:49:19.694580   11263 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:49:19.707537   11263 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-540000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (71.722ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-352000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (35.976416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-352000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-352000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-352000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.419167ms)

** stderr ** 
	error: context "default-k8s-diff-port-352000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-352000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (33.462708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-352000 image list --format=json
start_stop_delete_test.go:302: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (32.685292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-352000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-352000 --alsologtostderr -v=1: exit status 83 (44.661625ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-352000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-352000"

-- /stdout --
** stderr ** 
	I1216 03:49:17.451849   11282 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:17.452049   11282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:17.452052   11282 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:17.452054   11282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:17.452190   11282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:17.452403   11282 out.go:352] Setting JSON to false
	I1216 03:49:17.452409   11282 mustload.go:65] Loading cluster: default-k8s-diff-port-352000
	I1216 03:49:17.452628   11282 config.go:182] Loaded profile config "default-k8s-diff-port-352000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:17.457126   11282 out.go:177] * The control-plane node default-k8s-diff-port-352000 host is not running: state=Stopped
	I1216 03:49:17.460063   11282 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-352000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-352000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (32.963ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (33.494875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-352000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-540000 image list --format=json
start_stop_delete_test.go:302: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (33.96425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
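
Note: every expected v1.31.2 image is reported missing at once, which is consistent with `image list` running against the Stopped host shown in the post-mortem above rather than with the images actually being absent. A sketch for re-checking against a running node, assuming the profile can be started on this host:

	out/minikube-darwin-arm64 start -p newest-cni-540000
	out/minikube-darwin-arm64 -p newest-cni-540000 image list --format=json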

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-540000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-540000 --alsologtostderr -v=1: exit status 83 (44.460709ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-540000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:49:19.901330   11306 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:49:19.901529   11306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:19.901533   11306 out.go:358] Setting ErrFile to fd 2...
	I1216 03:49:19.901535   11306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:19.901664   11306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:49:19.901900   11306 out.go:352] Setting JSON to false
	I1216 03:49:19.901910   11306 mustload.go:65] Loading cluster: newest-cni-540000
	I1216 03:49:19.902129   11306 config.go:182] Loaded profile config "newest-cni-540000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:49:19.906430   11306 out.go:177] * The control-plane node newest-cni-540000 host is not running: state=Stopped
	I1216 03:49:19.910618   11306 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-540000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p newest-cni-540000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (34.693042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (33.966333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-540000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 8.4
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.92
39 TestErrorSpam/start 0.41
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 10.01
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.04
55 TestFunctional/serial/CacheCmd/cache/add_local 1.01
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.24
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.96
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.11
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.71
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.22
193 TestMainNoArgs 0.04
238 TestStoppedBinaryUpgrade/Setup 1.24
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
258 TestNoKubernetes/serial/ProfileList 0.14
259 TestNoKubernetes/serial/Stop 3.58
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 3.69
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.13
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
297 TestStartStop/group/embed-certs/serial/Stop 3.65
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.27
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
315 TestStartStop/group/newest-cni/serial/Stop 2.14
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 03:23:51.595463    7256 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1216 03:23:51.595867    7256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
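
Note: this subtest passes purely on local cache state. A quick manual equivalent, using the cache path from the log above, is to list the preload directory and look for the tarball:

	ls /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/
	# expect preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4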

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-259000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-259000: exit status 85 (107.107542ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |          |
	|         | -p download-only-259000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 03:23:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:23:30.313448    7257 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:23:30.313647    7257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:30.313651    7257 out.go:358] Setting ErrFile to fd 2...
	I1216 03:23:30.313653    7257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:30.313788    7257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	W1216 03:23:30.313867    7257 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20107-6737/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20107-6737/.minikube/config/config.json: no such file or directory
	I1216 03:23:30.315304    7257 out.go:352] Setting JSON to true
	I1216 03:23:30.333806    7257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4981,"bootTime":1734343229,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:23:30.333876    7257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:23:30.339244    7257 out.go:97] [download-only-259000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:23:30.339433    7257 notify.go:220] Checking for updates...
	W1216 03:23:30.339463    7257 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 03:23:30.343029    7257 out.go:169] MINIKUBE_LOCATION=20107
	I1216 03:23:30.346205    7257 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:23:30.351230    7257 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:23:30.354115    7257 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:23:30.358264    7257 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	W1216 03:23:30.364131    7257 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 03:23:30.364326    7257 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:23:30.367141    7257 out.go:97] Using the qemu2 driver based on user configuration
	I1216 03:23:30.367160    7257 start.go:297] selected driver: qemu2
	I1216 03:23:30.367175    7257 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:23:30.367259    7257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:23:30.370190    7257 out.go:169] Automatically selected the socket_vmnet network
	I1216 03:23:30.375717    7257 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 03:23:30.375879    7257 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:23:30.375911    7257 cni.go:84] Creating CNI manager for ""
	I1216 03:23:30.375965    7257 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 03:23:30.376027    7257 start.go:340] cluster config:
	{Name:download-only-259000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-259000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:30.380842    7257 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:23:30.385183    7257 out.go:97] Downloading VM boot image ...
	I1216 03:23:30.385205    7257 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso
	I1216 03:23:38.102723    7257 out.go:97] Starting "download-only-259000" primary control-plane node in "download-only-259000" cluster
	I1216 03:23:38.102757    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:38.158350    7257 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:23:38.158375    7257 cache.go:56] Caching tarball of preloaded images
	I1216 03:23:38.158602    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:38.162705    7257 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 03:23:38.162712    7257 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:38.243555    7257 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 03:23:50.223524    7257 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:50.223698    7257 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:50.919429    7257 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 03:23:50.919631    7257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/download-only-259000/config.json ...
	I1216 03:23:50.919651    7257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20107-6737/.minikube/profiles/download-only-259000/config.json: {Name:mk00e1d48f911675fb7532254ccf0baee4d79f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:50.919970    7257 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 03:23:50.920220    7257 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1216 03:23:51.542176    7257 out.go:193] 
	W1216 03:23:51.549349    7257 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20107-6737/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600 0x109650600] Decompressors:map[bz2:0x14000717250 gz:0x14000717258 tar:0x14000717200 tar.bz2:0x14000717210 tar.gz:0x14000717220 tar.xz:0x14000717230 tar.zst:0x14000717240 tbz2:0x14000717210 tgz:0x14000717220 txz:0x14000717230 tzst:0x14000717240 xz:0x14000717260 zip:0x14000717280 zst:0x14000717268] Getters:map[file:0x140017e6580 http:0x140005f81e0 https:0x140005f8230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1216 03:23:51.549375    7257 out_reason.go:110] 
	W1216 03:23:51.557171    7257 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:23:51.561205    7257 out.go:193] 
	
	
	* The control-plane node download-only-259000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-259000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
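
Note: the detail buried in these logs is the cached download failure: the checksum URL for kubectl v1.20.0 on darwin/arm64 returns 404, i.e. no such binary appears to have been published, which likely also explains the TestDownloadOnly/v1.20.0/json-events and kubectl entries in the fail table. A sketch to confirm the 404 from any machine with curl, using the URL from the log above (-f makes curl exit non-zero on an HTTP error, -L follows the dl.k8s.io redirect):

	curl -fsSL -o /dev/null https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 \
	  || echo "checksum file missing (HTTP error)"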

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-259000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (8.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-503000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (8.402501458s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.40s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1216 03:24:00.388241    7256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1216 03:24:00.388286    7256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-503000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-503000: exit status 85 (83.101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
	|         | -p download-only-259000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
	| delete  | -p download-only-259000        | download-only-259000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST | 16 Dec 24 03:23 PST |
	| start   | -o=json --download-only        | download-only-503000 | jenkins | v1.34.0 | 16 Dec 24 03:23 PST |                     |
	|         | -p download-only-503000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 03:23:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:23:52.017712    7300 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:23:52.017868    7300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:52.017871    7300 out.go:358] Setting ErrFile to fd 2...
	I1216 03:23:52.017874    7300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:52.018019    7300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:23:52.019208    7300 out.go:352] Setting JSON to true
	I1216 03:23:52.036998    7300 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5003,"bootTime":1734343229,"procs":575,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:23:52.037082    7300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:23:52.040470    7300 out.go:97] [download-only-503000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:23:52.040608    7300 notify.go:220] Checking for updates...
	I1216 03:23:52.044536    7300 out.go:169] MINIKUBE_LOCATION=20107
	I1216 03:23:52.047530    7300 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:23:52.050492    7300 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:23:52.053531    7300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:23:52.056533    7300 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	W1216 03:23:52.062454    7300 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 03:23:52.062612    7300 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:23:52.065502    7300 out.go:97] Using the qemu2 driver based on user configuration
	I1216 03:23:52.065511    7300 start.go:297] selected driver: qemu2
	I1216 03:23:52.065515    7300 start.go:901] validating driver "qemu2" against <nil>
	I1216 03:23:52.065562    7300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 03:23:52.068473    7300 out.go:169] Automatically selected the socket_vmnet network
	I1216 03:23:52.073772    7300 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 03:23:52.073859    7300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 03:23:52.073876    7300 cni.go:84] Creating CNI manager for ""
	I1216 03:23:52.073906    7300 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 03:23:52.073911    7300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 03:23:52.073959    7300 start.go:340] cluster config:
	{Name:download-only-503000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:52.078309    7300 iso.go:125] acquiring lock: {Name:mka2caf09120daf1ef838df613f61313c0b11f54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:23:52.081516    7300 out.go:97] Starting "download-only-503000" primary control-plane node in "download-only-503000" cluster
	I1216 03:23:52.081525    7300 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:23:52.138682    7300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1216 03:23:52.138704    7300 cache.go:56] Caching tarball of preloaded images
	I1216 03:23:52.138907    7300 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1216 03:23:52.143946    7300 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1216 03:23:52.143954    7300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1216 03:23:52.222233    7300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20107-6737/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-503000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-503000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.3s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 03:24:00.923020    7256 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-381000 --alsologtostderr --binary-mirror http://127.0.0.1:60797 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-381000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-381000
--- PASS: TestBinaryMirror (0.30s)
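
Note: --binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at an alternative HTTP endpoint; the test spins up its own local server on the port shown above. A hedged sketch of doing the same by hand; the directory path is hypothetical and would need to mirror dl.k8s.io's release layout:

	python3 -m http.server 60797 --directory /path/to/local-mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-381000 --alsologtostderr \
	  --binary-mirror http://127.0.0.1:60797 --driver=qemu2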

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-215000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-215000: exit status 85 (62.996292ms)

                                                
                                                
-- stdout --
	* Profile "addons-215000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-215000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-215000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-215000: exit status 85 (66.770417ms)

                                                
                                                
-- stdout --
	* Profile "addons-215000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-215000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.92s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
I1216 03:45:27.986296    7256 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 03:45:27.986481    7256 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (10.92s)

                                                
                                    
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status: exit status 7 (36.145958ms)

                                                
                                                
-- stdout --
	nospam-451000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status: exit status 7 (35.50825ms)

                                                
                                                
-- stdout --
	nospam-451000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status: exit status 7 (35.108375ms)

                                                
                                                
-- stdout --
	nospam-451000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)

                                                
                                    
TestErrorSpam/pause (0.13s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause: exit status 83 (45.245084ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause: exit status 83 (43.993542ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause: exit status 83 (44.783917ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

                                                
                                    
TestErrorSpam/unpause (0.13s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause: exit status 83 (44.481958ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause: exit status 83 (45.084625ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause: exit status 83 (44.806416ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-451000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-451000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

                                                
                                    
TestErrorSpam/stop (10.01s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop: (3.317561625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop: (3.174030375s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-451000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-451000 stop: (3.519877833s)
--- PASS: TestErrorSpam/stop (10.01s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20107-6737/.minikube/files/etc/test/nested/copy/7256/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1353459233/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache add minikube-local-cache-test:functional-648000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 cache delete minikube-local-cache-test:functional-648000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-648000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)
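
Note: taken together, the CacheCmd subtests above exercise the full image-cache round trip, and they pass even with the cluster host stopped because they only touch the local cache. The same flow by hand, using only commands from this log:

	out/minikube-darwin-arm64 -p functional-648000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-arm64 cache list
	out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1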

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 config get cpus: exit status 14 (35.235583ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 config get cpus: exit status 14 (33.91175ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
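
Note on the ConfigCmd run above: "config get" on an unset key is expected to fail with exit status 14, which is exactly what the two Non-zero exit lines assert after each "config unset cpus". A sketch of detecting that case from Go, assuming the same binary and profile as above:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // After `config unset cpus`, `config get cpus` exits 14, as in the log.
        err := exec.Command("out/minikube-darwin-arm64",
            "-p", "functional-648000", "config", "get", "cpus").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 14 {
            fmt.Println("cpus is not set in the profile config")
        }
    }
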
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-648000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (170.305625ms)
-- stdout --
	* [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1216 03:25:38.220354    7910 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:25:38.220540    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.220544    7910 out.go:358] Setting ErrFile to fd 2...
	I1216 03:25:38.220548    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.220724    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:25:38.222112    7910 out.go:352] Setting JSON to false
	I1216 03:25:38.243171    7910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1734343229,"procs":588,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:25:38.243239    7910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:25:38.248072    7910 out.go:177] * [functional-648000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 03:25:38.256010    7910 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:25:38.256079    7910 notify.go:220] Checking for updates...
	I1216 03:25:38.262998    7910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:25:38.266064    7910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:25:38.270053    7910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:25:38.272961    7910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:25:38.276050    7910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:25:38.279390    7910 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:25:38.279712    7910 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:25:38.282901    7910 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 03:25:38.290006    7910 start.go:297] selected driver: qemu2
	I1216 03:25:38.290014    7910 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:25:38.290062    7910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:25:38.297118    7910 out.go:201] 
	W1216 03:25:38.301016    7910 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 03:25:38.305000    7910 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
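
Note on the dry run above: the first invocation fails by design, since 250MB is below minikube's usable minimum of 1800MB, so validation aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any VM work happens; the second, flag-free dry run succeeds. A sketch of the check being exercised; the constant name is illustrative, not minikube's own (its real validation lives in the start command):

    package main

    import "fmt"

    // minUsableMemoryMB mirrors the "usable minimum of 1800MB" from the
    // error above; the name is illustrative.
    const minUsableMemoryMB = 1800

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250)) // fails, matching the dry-run result
    }
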
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-648000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-648000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.869333ms)
-- stdout --
	* [functional-648000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1216 03:25:38.466110    7921 out.go:345] Setting OutFile to fd 1 ...
	I1216 03:25:38.466273    7921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.466276    7921 out.go:358] Setting ErrFile to fd 2...
	I1216 03:25:38.466278    7921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:38.466406    7921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20107-6737/.minikube/bin
	I1216 03:25:38.467928    7921 out.go:352] Setting JSON to false
	I1216 03:25:38.486428    7921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1734343229,"procs":588,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 03:25:38.486507    7921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 03:25:38.491059    7921 out.go:177] * [functional-648000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1216 03:25:38.498072    7921 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 03:25:38.498160    7921 notify.go:220] Checking for updates...
	I1216 03:25:38.506041    7921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	I1216 03:25:38.507402    7921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 03:25:38.510056    7921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:25:38.513095    7921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	I1216 03:25:38.516061    7921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:25:38.519339    7921 config.go:182] Loaded profile config "functional-648000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1216 03:25:38.519634    7921 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 03:25:38.523079    7921 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1216 03:25:38.530040    7921 start.go:297] selected driver: qemu2
	I1216 03:25:38.530049    7921 start.go:901] validating driver "qemu2" against &{Name:functional-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:25:38.530116    7921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:25:38.537049    7921 out.go:201] 
	W1216 03:25:38.540985    7921 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 03:25:38.545016    7921 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
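
Note on the localized output above: this subtest repeats the undersized dry run and asserts the French strings; "Utilisation du pilote qemu2 basé sur le profil existant" is "Using the qemu2 driver based on the existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY line likewise reads "The requested 250 MiB memory allocation is below the usable minimum of 1800 MB". A sketch of reproducing it; the assumption here is that minikube picks its language from standard locale variables such as LC_ALL, since the exact variable functional_test.go sets is not visible in this log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-648000",
            "--dry-run", "--memory", "250MB", "--driver=qemu2")
        // Assumed mechanism: a French locale in the child environment.
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
        out, _ := cmd.CombinedOutput()
        fmt.Printf("%s", out)
    }
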
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.925183s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-648000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image rm kicbase/echo-server:functional-648000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-648000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 image save --daemon kicbase/echo-server:functional-648000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-648000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.983541ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.026625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "59.259583ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.811834ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014469708s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
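
Note on the DNS check above: while "minikube tunnel" is up it publishes the service's cluster DNS name to the macOS resolver, which is why the dscacheutil query for nginx-svc.default.svc.cluster.local. succeeds. The same lookup from Go, as a sketch; whether it actually consults the macOS directory service depends on which resolver Go selects (cgo vs. pure Go), so treat that as an assumption:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Same name the test resolves with dscacheutil above.
        addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved to:", addrs)
    }
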
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-648000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-648000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-648000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-648000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-736000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-736000 --output=json --user=testUser: (2.711707916s)
--- PASS: TestJSONOutput/stop/Command (2.71s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-110000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-110000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (105.944709ms)
-- stdout --
	{"specversion":"1.0","id":"20e4b6fd-4d0c-40b4-9f10-a3013a79369f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-110000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"486d5622-5592-491d-815a-1fbf20d53659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"8e32622e-0e23-4fb8-a1ec-6892760b9d56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig"}}
	{"specversion":"1.0","id":"e4fe9540-a8e3-483e-aca6-a4d87af127f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"81e38455-2bb0-4db8-857e-d729f478e16c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"377bb7c5-0bd0-4011-a6de-38a171946cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube"}}
	{"specversion":"1.0","id":"5fd49c3e-a4ca-441f-bb06-f494a047d02a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f26b928-58ee-46dc-822c-3c38032b8487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-110000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-110000
--- PASS: TestErrorJSONOutput (0.22s)
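
Note on the JSON stream above: with --output=json, minikube emits one CloudEvents-style JSON object per line, and the test asserts that the unsupported "fail" driver surfaces as an io.k8s.sigs.minikube.error event with exit code 56 (DRV_UNSUPPORTED_OS). A sketch of consuming that stream; the struct fields match what is visible in the stdout above, and the sample line is abridged:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // event models the per-line JSON objects shown above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // skip any non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }
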
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.24s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-873000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.763292ms)
-- stdout --
	* [NoKubernetes-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20107-6737/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20107-6737/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
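
Note on the usage failure above: it is the behavior under test. The flags --no-kubernetes and --kubernetes-version are mutually exclusive, so start exits with status 14 (MK_USAGE) and points at "minikube config unset kubernetes-version" for clearing a global default. Dropping the version flag yields a valid invocation, shown here purely for illustration; the later subtests in this group are what actually drive the profile:

    out/minikube-darwin-arm64 start -p NoKubernetes-850000 --no-kubernetes --driver=qemu2
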
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-850000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-850000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.613292ms)
-- stdout --
	* The control-plane node NoKubernetes-850000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-850000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-850000
W1216 03:45:29.970654    7256 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1216 03:45:29.970896    7256 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1216 03:45:29.970945    7256 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit
	I1216 03:45:30.481718    7256 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900 0x106d89900] Decompressors:map[bz2:0x14000519fc0 gz:0x14000519fc8 tar:0x14000519f50 tar.bz2:0x14000519f60 tar.gz:0x14000519f80 tar.xz:0x14000519f90 tar.zst:0x14000519fa0 tbz2:0x14000519f60 tgz:0x14000519f80 txz:0x14000519f90 tzst:0x14000519fa0 xz:0x14000519fd0 zip:0x14000519ff0 zst:0x14000519fd8] Getters:map[file:0x1400154bef0 http:0x14000b14320 https:0x14000b14370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 03:45:30.481867    7256 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3762243856/001/docker-machine-driver-hyperkit
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-850000: (3.578257s)
--- PASS: TestNoKubernetes/serial/Stop (3.58s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-850000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-850000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (54.745209ms)
-- stdout --
	* The control-plane node NoKubernetes-850000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-850000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-424000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-424000 --alsologtostderr -v=3: (3.688543666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (61.89325ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-424000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
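
Note on the status error above: it is expected. After a stop, "minikube status" reports Stopped and exits non-zero (status 7 in this run), which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of the same tolerant check, assuming the binary and profile from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 7 with "Stopped" on stdout matches the run above.
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-424000").Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            fmt.Printf("host is %s; a stopped host is acceptable here\n", out)
        }
    }
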
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-766000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-766000 --alsologtostderr -v=3: (3.131266s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-766000 -n no-preload-766000: exit status 7 (66.64975ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-766000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-092000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-092000 --alsologtostderr -v=3: (3.654644541s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.65s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-092000 -n embed-certs-092000: exit status 7 (61.849ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-092000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-352000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-352000 --alsologtostderr -v=3: (3.268223959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-352000 -n default-k8s-diff-port-352000: exit status 7 (60.790083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-352000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-540000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-540000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-540000 --alsologtostderr -v=3: (2.137184292s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-540000 -n newest-cni-540000: exit status 7 (58.971667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-540000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.12s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port651080179/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734348304800494000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port651080179/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734348304800494000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port651080179/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734348304800494000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port651080179/001/test-1734348304800494000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.463ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:04.860037    7256 retry.go:31] will retry after 559.476802ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.270333ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:05.509188    7256 retry.go:31] will retry after 996.922788ms: exit status 83
I1216 03:25:05.819729    7256 retry.go:31] will retry after 5.713366157s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.559708ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:06.600131    7256 retry.go:31] will retry after 970.792005ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.617208ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:07.663910    7256 retry.go:31] will retry after 2.058333456s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.661417ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:09.817373    7256 retry.go:31] will retry after 2.876204798s: exit status 83
I1216 03:25:11.535396    7256 retry.go:31] will retry after 7.897508822s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.139833ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:12.787040    7256 retry.go:31] will retry after 3.871731348s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.808791ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo umount -f /mount-9p": exit status 83 (49.74325ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port651080179/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.12s)
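The skip above, and the two MountCmd skips that follow, share the same shape: the test polls findmnt inside the guest, sleeping with a growing backoff between attempts, and gives up once the mount never appears. A minimal Go sketch of that poll-with-backoff loop (a hypothetical pollMount helper for illustration, not minikube's actual retry.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollMount is a hypothetical helper mirroring the retry pattern in the
// log above: run `minikube ssh "findmnt ..."` until it succeeds, sleeping
// with a doubling backoff between attempts, and give up at the deadline.
func pollMount(profile string, deadline time.Duration) error {
	backoff := 500 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if err := cmd.Run(); err == nil {
			return nil // the 9p mount is visible inside the guest
		}
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, like the retry.go intervals above
	}
	return fmt.Errorf("mount did not appear within %s", deadline)
}

func main() {
	if err := pollMount("functional-648000", 12*time.Second); err != nil {
		fmt.Println("skipping:", err)
	}
}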

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (10.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1717823906/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.059416ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:16.989638    7256 retry.go:31] will retry after 430.155758ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.24125ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:17.513352    7256 retry.go:31] will retry after 1.118158832s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.020541ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:18.724901    7256 retry.go:31] will retry after 1.407452643s: exit status 83
I1216 03:25:19.435136    7256 retry.go:31] will retry after 9.403185662s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.12825ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:20.225895    7256 retry.go:31] will retry after 1.068713409s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.136708ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:21.389219    7256 retry.go:31] will retry after 2.281561296s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.518292ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:23.762812    7256 retry.go:31] will retry after 3.792807764s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.6475ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "sudo umount -f /mount-9p": exit status 83 (51.627709ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-648000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1717823906/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (86.649208ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:27.915146    7256 retry.go:31] will retry after 661.104495ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (89.582041ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:28.668224    7256 retry.go:31] will retry after 886.630031ms: exit status 83
I1216 03:25:28.840514    7256 retry.go:31] will retry after 12.169718134s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (92.026541ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:29.649399    7256 retry.go:31] will retry after 917.657537ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (91.290333ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:30.660656    7256 retry.go:31] will retry after 1.400692036s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (97.49ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:32.161151    7256 retry.go:31] will retry after 2.127659189s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (92.643125ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
I1216 03:25:34.383749    7256 retry.go:31] will retry after 3.274709535s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-648000 ssh "findmnt -T" /mount1: exit status 83 (90.024709ms)

-- stdout --
	* The control-plane node functional-648000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-648000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-648000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3924111287/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.33s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-989000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-989000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-989000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/hosts:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/resolv.conf:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-989000

>>> host: crictl pods:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: crictl containers:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> k8s: describe netcat deployment:
error: context "cilium-989000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-989000" does not exist

>>> k8s: netcat logs:
error: context "cilium-989000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-989000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-989000" does not exist

>>> k8s: coredns logs:
error: context "cilium-989000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-989000" does not exist

>>> k8s: api server logs:
error: context "cilium-989000" does not exist

>>> host: /etc/cni:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: ip a s:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: ip r s:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: iptables-save:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: iptables table nat:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-989000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-989000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-989000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-989000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-989000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-989000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-989000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-989000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-989000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-989000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-989000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: kubelet daemon config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> k8s: kubelet logs:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-989000

>>> host: docker daemon status:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: docker daemon config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: docker system info:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: cri-docker daemon status:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: cri-docker daemon config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: cri-dockerd version:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: containerd daemon status:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: containerd daemon config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: containerd config dump:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: crio daemon status:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: crio daemon config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: /etc/crio:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

>>> host: crio config:
* Profile "cilium-989000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989000"

----------------------- debugLogs end: cilium-989000 [took: 2.387811375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-989000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-285000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-285000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)